A Survey on Forensics and Compliance Auditing for Critical Infrastructure Protection
Modern societies depend ever more heavily on the essential services provided by Critical Infrastructures, making their trustworthiness increasingly relevant. However, Critical Infrastructures are attractive targets for cyberattacks, due to the potential for considerable impact, not just at the economic level but also in terms of physical damage and even loss of human life. Complementing traditional security mechanisms, forensics and compliance audit processes play an important role in ensuring Critical Infrastructure trustworthiness. Compliance auditing checks whether security measures are in place and compliant with standards and internal policies, while forensics supports the investigation of past security incidents. Since these two areas overlap significantly in terms of data sources, tools, and techniques, they can
be merged into unified Forensics and Compliance Auditing (FCA) frameworks. In this paper, we survey the
latest developments, methodologies, challenges, and solutions addressing forensics and compliance auditing
in the scope of Critical Infrastructure Protection. This survey focuses on relevant contributions, capable of
tackling the requirements imposed by massively distributed and complex Industrial Automation and Control
Systems, in terms of handling large volumes of heterogeneous data (that can be noisy, ambiguous, and
redundant) for analytic purposes, with adequate performance and reliability. From this analysis we derive a taxonomy for the FCA field, whose key categories reflect the main topics in the literature, and we consolidate the collected knowledge into a reference FCA architecture, proposed as a generic template for a converged platform. These results are intended to guide future research on forensics and compliance auditing for Critical Infrastructure Protection.
Design of new algorithms for gene network reconstruction applied to in silico modeling of biomedical data
Doctoral Programme in Biotechnology, Engineering and Chemical Technology. Research line: Engineering, Data Science and Bioinformatics. Programme code: DBI. Line code: 111.
The root causes of disease are still poorly understood. The success of current therapies is limited because persistent diseases are frequently treated based on their symptoms rather than the underlying cause of the disease. Therefore, biomedical research is experiencing a technology-driven shift to data-driven holistic approaches to better characterize the molecular mechanisms causing disease. Using omics data as an input, emerging disciplines like network biology attempt to model the relationships between biomolecules. To this effect, gene co-expression networks arise as a promising tool for deciphering the relationships between genes in large transcriptomic datasets. However, because of their low specificity and high false positive rate, they demonstrate a limited capacity to retrieve the disrupted mechanisms that lead to disease onset, progression, and maintenance. Within the context of statistical modeling, we delved deeper into the reconstruction of gene co-expression networks with the specific goal of discovering disease-specific features directly from expression data. Using ensemble techniques, which combine the results of various metrics, we were able to capture biologically significant relationships between genes more precisely. We were also able to find de novo potential disease-specific features with the help of prior biological knowledge and the development of new network inference techniques.
Through our different approaches, we analyzed large gene sets across multiple samples and used gene expression as a surrogate marker for the inherent biological processes, reconstructing robust gene co-expression networks that are simple to explore. By mining disease-specific gene co-expression networks, we arrive at a useful framework for identifying new omics-phenotype associations from conditional expression datasets. In this sense, understanding diseases from the perspective of biological network perturbations will improve personalized medicine, impacting rational biomarker discovery, patient stratification and drug design, and ultimately leading to more targeted therapies.
Universidad Pablo de Olavide de Sevilla. Departamento de Deporte e Informática.
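The ensemble step described above, combining the outputs of several correlation metrics so that only edges the metrics agree on survive, can be sketched in a few lines. This is a minimal illustration, not the thesis's actual inference method: the choice of Pearson and Spearman, the simple averaging rule, and the 0.75 threshold are all assumptions made for the example.

```python
import numpy as np
from scipy.stats import spearmanr

def ensemble_coexpression(expr, threshold=0.75):
    """Consensus co-expression network from two correlation metrics.
    expr: genes as rows, samples as columns. Returns a boolean adjacency
    matrix keeping edges whose averaged |correlation| passes the threshold."""
    pearson = np.corrcoef(expr)            # Pearson between gene rows
    spearman, _ = spearmanr(expr, axis=1)  # Spearman between gene rows
    consensus = (np.abs(pearson) + np.abs(spearman)) / 2.0
    np.fill_diagonal(consensus, 0.0)       # ignore self-edges
    return consensus >= threshold

# Toy expression matrix: genes 0-2 co-regulated, genes 3-5 independent noise
rng = np.random.default_rng(0)
base = rng.normal(size=50)
expr = np.vstack([base + rng.normal(scale=0.1, size=50) for _ in range(3)]
                 + [rng.normal(size=50) for _ in range(3)])
adj = ensemble_coexpression(expr)
```

With this setup the three co-regulated genes form a clique in `adj`, while the independent genes stay unconnected; real pipelines would add many more metrics and a principled threshold.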
An Ensemble Semi-Supervised Adaptive Resonance Theory Model with Explanation Capability for Pattern Classification
Most semi-supervised learning (SSL) models entail complex structures and
iterative training processes as well as face difficulties in interpreting their
predictions to users. To address these issues, this paper proposes a new
interpretable SSL model using the supervised and unsupervised Adaptive
Resonance Theory (ART) family of networks, which is denoted as SSL-ART.
Firstly, SSL-ART adopts an unsupervised fuzzy ART network to create a number of
prototype nodes using unlabeled samples. Then, it leverages a supervised fuzzy
ARTMAP structure to map the established prototype nodes to the target classes
using labeled samples. Specifically, a one-to-many (OtM) mapping scheme is
devised to associate a prototype node with more than one class label. The main
advantages of SSL-ART include the capability of: (i) performing online
learning, (ii) reducing the number of redundant prototype nodes through the OtM
mapping scheme and minimizing the effects of noisy samples, and (iii) providing
an explanation facility for users to interpret the predicted outcomes. In
addition, a weighted voting strategy is introduced to form an ensemble SSL-ART
model, which is denoted as WESSL-ART. Every ensemble member, i.e., SSL-ART,
assigns a different weight to each class based on its
performance pertaining to the corresponding class. The aim is to mitigate the
effects of training data sequences on all SSL-ART members and improve the
overall performance of WESSL-ART. The experimental results on eighteen
benchmark data sets, three artificially generated data sets, and a real-world
case study indicate the benefits of the proposed SSL-ART and WESSL-ART models
for tackling pattern classification problems.
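The fuzzy ART dynamics that SSL-ART builds on (complement coding, category choice, vigilance test, fast learning), together with a one-to-many label tally, can be sketched compactly. This is a generic fuzzy ART sketch under standard parameter choices (vigilance rho, choice parameter alpha), not the paper's SSL-ART implementation; the toy data and the dictionary-based OtM mapping are illustrative assumptions.

```python
import numpy as np

def complement_code(x):
    """Fuzzy ART complement coding: [x, 1 - x]."""
    return np.concatenate([x, 1.0 - x])

class FuzzyART:
    """Minimal unsupervised fuzzy ART with fast learning."""
    def __init__(self, rho=0.75, alpha=0.001):
        self.rho, self.alpha = rho, alpha
        self.w = []  # prototype weight vectors

    def train_sample(self, x):
        i = complement_code(x)
        # Category choice: T_j = |i ^ w_j| / (alpha + |w_j|)
        scores = [np.minimum(i, w).sum() / (self.alpha + w.sum())
                  for w in self.w]
        for j in np.argsort(scores)[::-1]:       # best candidate first
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:                # vigilance test passed
                self.w[j] = np.minimum(i, self.w[j])  # fast learning
                return j
        self.w.append(i.copy())                  # no resonance: new prototype
        return len(self.w) - 1

art = FuzzyART(rho=0.8)
labels = {}  # prototype index -> class counts (one-to-many mapping idea)
data = [(np.array([0.10, 0.10]), "A"), (np.array([0.12, 0.09]), "A"),
        (np.array([0.90, 0.90]), "B")]
for x, y in data:
    j = art.train_sample(x)
    labels.setdefault(j, {}).setdefault(y, 0)
    labels[j][y] += 1
```

The two nearby "A" samples resonate with one prototype while the distant "B" sample creates a second; SSL-ART's contribution lies in how unlabeled samples build the prototypes and labeled samples drive the OtM mapping, which this sketch only gestures at.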
A Framework for Meta-heuristic Parameter Performance Prediction Using Fitness Landscape Analysis and Machine Learning
The behaviour of an optimization algorithm when attempting to solve a problem depends on the values assigned to its control parameters. For an algorithm to achieve desirable performance, its control parameter values must be chosen to suit the problem at hand. Despite being necessary for optimal performance, selecting appropriate control parameter values is time-consuming, computationally expensive, and challenging. As the number of control parameters increases, so does the time complexity of searching for practical values, which often overshadows the problem itself and limits the algorithm's efficiency. As the no free lunch theorem makes clear, there is no one-size-fits-all approach to problem-solving; an approach tailored to an understanding of the specific problem can therefore substantially help solve it.
To predict the performance of control parameter configurations in unseen environments, this thesis crafts an intelligent generalizable framework leveraging machine learning classification and quantitative characteristics of the problem in question. The proposed parameter performance classifier (PPC) framework is extensively explored by training 84 high-accuracy classifiers built from multiple sampling methods, fitness types, and binning strategies. Furthermore, the novel framework is utilized in constructing a new parameter-free particle swarm optimization (PSO) variant called PPC-PSO that effectively eliminates the computational cost of parameter tuning, yields competitive performance amongst other leading methodologies across 99 benchmark functions, and is highly accessible to researchers and practitioners. The success of PPC-PSO shows excellent promise for the applicability of the PPC framework in making many more robust parameter-free meta-heuristic algorithms in the future with strong generalization capabilities.
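The PPC idea of summarizing a problem with cheap fitness-landscape features and then classifying which performance bin a parameter configuration will fall into might be sketched as below. The features chosen here (mean, standard deviation, fitness-distance correlation), the 1-NN classifier, and the "good"/"poor" bins are simplifying assumptions for illustration; the thesis's 84 classifiers use richer sampling methods, fitness types, and binning strategies.

```python
import numpy as np

def landscape_features(f, dim, n=200, seed=0):
    """Cheap landscape summary from uniform random samples: mean fitness,
    fitness spread, and fitness-distance correlation to the best sample."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n, dim))
    y = np.array([f(x) for x in X])
    d = np.linalg.norm(X - X[np.argmin(y)], axis=1)  # distance to best
    fdc = np.corrcoef(y, d)[0, 1]
    return np.array([y.mean(), y.std(), fdc])

def nn_predict(train_feats, train_labels, feat):
    """1-NN 'parameter performance classifier': the nearest known
    landscape decides the predicted performance bin."""
    dists = np.linalg.norm(train_feats - feat, axis=1)
    return train_labels[np.argmin(dists)]

sphere = lambda x: float((x ** 2).sum())
rastrigin = lambda x: float(10 * len(x) + (x**2 - 10*np.cos(2*np.pi*x)).sum())

# Hypothetical training set: one parameter configuration labelled by the
# performance bin it achieved on each landscape (labels are invented).
train = np.vstack([landscape_features(sphere, 2),
                   landscape_features(rastrigin, 2)])
labels = np.array(["good", "poor"])
pred = nn_predict(train, labels, landscape_features(sphere, 2, seed=1))
```

A fresh sample of the sphere landscape maps back to the "good" bin because its features sit far closer to the sphere's training features than to Rastrigin's.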
Development of an R package to learn supervised classification techniques
This TFG aims to develop a custom R package for teaching supervised classification algorithms, starting
with the identification of requirements, including algorithms, data structures, and libraries. A strong
theoretical foundation is essential for effective package design. Documentation will explain each function's
purpose, accompanied by the required supporting documentation.
The package will include R scripts and data files in organized directories, complemented by a user
manual for easy installation and usage, even for beginners. Built entirely from scratch without external
dependencies, it is optimized for accuracy and performance.
In conclusion, this TFG provides a roadmap for creating an R package to teach supervised classification
algorithms, benefiting researchers and practitioners dealing with real-world challenges.
Bachelor's Degree in Computer Engineering.
Advanced analytical methods for fraud detection: a systematic literature review
The developments of the digital era demand new ways of producing goods and rendering services. This fast-paced evolution in companies requires a new approach from auditors, who must keep up with the constant transformation. Given the dynamic dimensions of data, it is important to seize the opportunity to add value to companies, and the need to apply more robust fraud-detection methods is evident.
In this thesis the use of advanced analytical methods for fraud detection will be
investigated, through the analysis of the existent literature on this topic.
Both a systematic review of the literature and a bibliometric approach will be applied to
the most appropriate database to measure the scientific production and current trends.
This study intends to contribute to the academic research conducted to date by centralizing the existing information on this topic.
Measuring the impact of COVID-19 on hospital care pathways
Care pathways in hospitals around the world reported significant disruption during the recent COVID-19 pandemic, but measuring the actual impact is more problematic. Process mining can be useful for hospital management to measure the conformance of real-life care to what might be considered normal operations. In this study, we aim to demonstrate that process mining can be used to investigate process changes associated with complex disruptive events. We studied perturbations to accident and emergency (A&E) and maternity pathways in a UK public hospital during the COVID-19 pandemic. Coincidentally, the hospital had implemented a Command Centre approach for patient-flow management, affording an opportunity to study both the planned improvement and the disruption due to the pandemic. Our study proposes and demonstrates a method for measuring and investigating the impact of such planned and unplanned disruptions affecting hospital care pathways. We found that during the pandemic, both A&E and maternity pathways had measurable reductions in the mean length of stay and a measurable drop in the percentage of pathways conforming to normative models. There were no distinctive patterns in the monthly mean values of length of stay or conformance throughout the phases of the installation of the hospital's new Command Centre approach. Due to a deficit in the available A&E data, the findings for A&E pathways could not be interpreted.
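The two measurements used in the study, mean length of stay and the percentage of pathways conforming to a normative model, can be illustrated on a toy event log. The log format, activity names, and the four-step normative A&E model below are invented for this sketch and are not taken from the hospital data.

```python
from datetime import datetime
from statistics import mean

# Toy event log: (case_id, activity, timestamp). Each case's time-ordered
# activities form its care pathway.
NORMATIVE = ["arrive", "triage", "treat", "discharge"]  # assumed model

log = [
    ("c1", "arrive", "2020-01-01 10:00"), ("c1", "triage", "2020-01-01 10:30"),
    ("c1", "treat", "2020-01-01 11:00"), ("c1", "discharge", "2020-01-01 14:00"),
    ("c2", "arrive", "2020-04-01 09:00"), ("c2", "treat", "2020-04-01 09:40"),
    ("c2", "discharge", "2020-04-01 10:00"),
]

def traces(log):
    """Group events by case and sort each case's events by timestamp."""
    out = {}
    for case, act, ts in log:
        out.setdefault(case, []).append((datetime.fromisoformat(ts), act))
    return {c: sorted(evts) for c, evts in out.items()}

def mean_length_of_stay_hours(log):
    """Mean time between first and last event of each case, in hours."""
    t = traces(log)
    return mean((e[-1][0] - e[0][0]).total_seconds() / 3600 for e in t.values())

def pct_conformant(log, model):
    """Percentage of cases whose activity sequence exactly matches the model."""
    t = traces(log)
    ok = sum(1 for e in t.values() if [a for _, a in e] == model)
    return 100.0 * ok / len(t)
```

Here case c1 follows the normative pathway while c2 skips triage, so conformance drops to 50% and the mean length of stay averages the two visits; real conformance checking (e.g., alignment-based fitness) is more forgiving than exact sequence matching.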