
    An evolutionary model to mine high expected utility patterns from uncertain databases

    In recent decades, mobile and Internet of Things (IoT) devices have increased dramatically across many domains and applications, generating and producing massive amounts of data. These collected data contain a large amount of interesting information (e.g., interestingness, weight, frequency, or uncertainty), yet most existing and generic pattern mining algorithms consider only single objectives and precise data when discovering the required information. Moreover, since the collected information is huge, it is necessary to discover meaningful and up-to-date information within a limited and particular time. In this paper, we consider both utility and uncertainty as the main objectives in order to efficiently mine interesting high expected utility patterns (HEUPs) within a limited time, based on a multi-objective evolutionary framework. The benefit of the designed model (called MOEA-HEUPM) is that it can discover valuable HEUPs without pre-defined threshold values (i.e., minimum utility and minimum uncertainty) in an uncertain environment. Two encoding methodologies are also considered in the developed MOEA-HEUPM to show its effectiveness. Based on the developed MOEA-HEUPM model, the set of non-dominated HEUPs can be discovered within a limited time for decision-making. Experiments are then conducted to show the effectiveness and efficiency of the designed MOEA-HEUPM model in terms of convergence, hypervolume, and number of discovered patterns compared to generic approaches.
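    As a rough illustration of the non-dominated selection step that such a multi-objective framework relies on, the Python sketch below keeps only the Pareto-optimal candidates under two objectives (expected utility and existential probability). The itemsets and values are hypothetical, and the sketch does not reproduce MOEA-HEUPM's evolutionary encoding or operators.

```python
# Minimal sketch of non-dominated (Pareto) selection over candidate patterns.
# Itemsets, utilities, and probabilities are hypothetical illustration data.
from typing import List, Tuple

Candidate = Tuple[str, float, float]  # (pattern, expected utility, existential probability)

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if a is at least as good as b on both objectives and strictly better on one."""
    return (a[1] >= b[1] and a[2] >= b[2]) and (a[1] > b[1] or a[2] > b[2])

def non_dominated(front: List[Candidate]) -> List[Candidate]:
    """Keep only candidates not dominated by any other candidate (the Pareto set)."""
    return [c for c in front if not any(dominates(o, c) for o in front if o is not c)]

population = [
    ("{A,B}",   120.0, 0.80),
    ("{A,C}",   150.0, 0.55),
    ("{B,C}",    90.0, 0.95),
    ("{A,B,C}", 110.0, 0.60),   # dominated by {A,B} on both objectives
]
print(non_dominated(population))
```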

    Temporal Information in Data Science: An Integrated Framework and its Applications

    Data science is a well-known buzzword that is in fact composed of two distinct keywords, i.e., data and science. Data itself is of great importance: each analysis task begins from a set of examples. Based on such a consideration, the present work starts with the analysis of a real case scenario, namely the development of a data warehouse-based decision support system for an Italian contact center company. Then, relying on the information collected in the developed system, a set of machine learning-based analysis tasks has been developed to answer specific business questions, such as employee work anomaly detection and automatic call classification. Although such initial applications rely on already available algorithms, as we shall see, some clever analysis workflows also had to be developed.
    Afterwards, continuously driven by real data and real-world applications, we turned to the question of how to handle temporal information within classical decision tree models. Our research led to the development of J48SS, a decision tree induction algorithm based on Quinlan's C4.5 learner, which is capable of dealing with temporal (e.g., sequential and time series) as well as atemporal (such as numerical and categorical) data during the same execution cycle. The decision tree has been applied to several real-world analysis tasks, proving its worth. A key characteristic of J48SS is its interpretability, an aspect that we specifically addressed through the study of an evolutionary decision tree pruning technique. Next, since a lot of work concerning the management of temporal information has already been done in the automated reasoning and formal verification fields, a natural direction in which to proceed was to investigate how such solutions may be combined with machine learning, following two main tracks. First, we show, through the development of an enriched decision tree capable of encoding temporal information by means of interval temporal logic formulas, how a machine learning algorithm can successfully exploit temporal logic to perform data analysis. Then, we focus on the opposite direction, i.e., employing machine learning techniques to generate temporal logic formulas, considering a natural language processing scenario. Finally, as a conclusive development, the architecture of a system is proposed in which formal methods and machine learning techniques are seamlessly combined to perform anomaly detection and predictive maintenance tasks. Such an integration represents an original, thrilling research direction that may open up new ways of dealing with complex, real-world problems.
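    To make the idea of mixing temporal and atemporal attributes more concrete, the hedged sketch below derives a simple time-series feature (distance to a candidate subsequence) and feeds it, together with a numeric attribute, into an ordinary decision tree. It only illustrates the principle; J48SS itself extends C4.5 and handles sequences natively, and the shapelet, data, and labels here are invented.

```python
# Hedged sketch: turn a temporal signal into a feature that a standard decision
# tree can split on, alongside an atemporal attribute. Not J48SS itself.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def shapelet_distance(series: np.ndarray, shapelet: np.ndarray) -> float:
    """Minimum Euclidean distance between the shapelet and any window of the series."""
    w = len(shapelet)
    return min(np.linalg.norm(series[i:i + w] - shapelet) for i in range(len(series) - w + 1))

rng = np.random.default_rng(0)
shapelet = np.array([0.0, 1.0, 0.0])                   # hypothetical discriminative subsequence
series_set = [rng.normal(size=20) for _ in range(40)]  # toy time series
numeric_attr = rng.normal(size=40)                     # an atemporal numeric attribute
labels = rng.integers(0, 2, size=40)                   # toy class labels

X = np.column_stack([
    [shapelet_distance(s, shapelet) for s in series_set],  # temporal feature
    numeric_attr,                                           # atemporal feature
])
clf = DecisionTreeClassifier(max_depth=3).fit(X, labels)
print(clf.tree_.node_count, "tree nodes")
```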

    Monitoring Real-Time Security Attacks for IoT Systems Using DevSecOps: A Systematic Literature Review

    Bahaa, A., Abdelaziz, A., Sayed, A., Elfangary, L., & Fahmy, H. (2021). Monitoring real time security attacks for IoT systems using DevSecOps: A systematic literature review. Information (Switzerland), 12(4), 1-23. [154]. https://doi.org/10.3390/info12040154
    In many enterprises and the private sector, the Internet of Things (IoT) has spread globally. The growing number of different devices connected to the IoT and their various protocols have contributed to an increasing number of attacks, such as denial-of-service (DoS) and remote-to-local (R2L) attacks. Several approaches and techniques can be used to construct attack detection models, such as machine learning, data mining, and statistical analysis; nowadays, machine learning is commonly used because it can provide precise analysis and results. Therefore, we decided to study the previous literature on the detection of IoT attacks using machine learning in order to understand the process of creating detection models. We also evaluated the various datasets used for the models, the IoT attack types, the independent variables used for the models, the evaluation metrics for assessing the models, and the monitoring infrastructure using DevSecOps pipelines. We found 49 primary studies, and the detection models were developed using seven different types of machine learning techniques. Most primary studies used IoT device testbed datasets, while others used public datasets such as NSL-KDD and UNSW-NB15. When it comes to measuring the efficiency of models, both numerical and graphical measures are commonly used. According to the literature, most IoT attacks occur at the network layer. Detection models whose development processes for IoT devices applied DevSecOps pipelines were found to be more secure. From the results of this paper, we found that machine learning techniques can detect IoT attacks, but there are a few issues in the design of detection models. We also recommend the continued use of hybrid frameworks for the improved detection of IoT attacks, advanced monitoring infrastructure configurations using methods based on software pipelines, and the use of machine learning techniques for advanced supervision and monitoring.
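    The hedged sketch below shows the general shape of the detection models the review surveys: a supervised classifier trained on labelled network-flow features. The synthetic features and toy labelling rule merely stand in for real records from datasets such as NSL-KDD or UNSW-NB15, whose loading and preprocessing are dataset-specific.

```python
# Sketch of a supervised IoT attack-detection model over labelled flow features.
# All data below are synthetic stand-ins for real intrusion-detection records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.exponential(1.0, n),       # e.g. flow duration
    rng.poisson(20, n),            # e.g. packet count
    rng.integers(0, 3, n),         # e.g. encoded protocol
])
y = (X[:, 1] > 25).astype(int)     # toy rule standing in for DoS-like labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```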

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond local optimality, which often traps traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.
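    As a generic illustration of how a metaheuristic steers low-level neighbourhood moves past local optima, the sketch below runs simulated annealing on a deliberately multimodal function. It is not drawn from any particular contribution in the collection; the function and parameters are arbitrary.

```python
# Simulated annealing on a multimodal 1D function: uphill moves are accepted
# with a probability that decays with temperature, letting the search escape
# local minima that would trap a greedy descent.
import math
import random

def cost(x: float) -> float:
    """Multimodal toy objective with many local minima."""
    return x * x + 10 * math.sin(3 * x) + 10

random.seed(0)
x, best = 4.0, 4.0
temp = 5.0
while temp > 1e-3:
    candidate = x + random.uniform(-0.5, 0.5)            # low-level neighbourhood move
    delta = cost(candidate) - cost(x)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate                                     # accept worse moves probabilistically
    if cost(x) < cost(best):
        best = x
    temp *= 0.995                                         # cooling schedule
print(f"best x = {best:.3f}, cost = {cost(best):.3f}")
```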

    Understanding Optimisation Processes with Biologically-Inspired Visualisations

    Evolutionary algorithms (EAs) constitute a branch of artificial intelligence used to evolve solutions to the optimisation problems that abound in industry and research. EAs often generate many solutions, and visualisation has been a primary strategy for displaying them, given that visualisation is a well-evaluated, multi-domain medium for comprehending extensive data. Visualising solutions comes with inherent challenges arising from high-dimensional phenomena and the large number of solutions to display. Recently, scholars have produced methods that mitigate some of these known issues when illustrating solutions. However, one key consideration is that displaying only the final subset of solutions (rather than the whole population) discards most of the informativeness of the search, creating inadequate insight into the black-box EA. There is an unequivocal knowledge gap and a requirement for methods which can visualise the whole population of solutions from an optimiser and overcome the high-dimensionality and scaling issues in order to make the EA search process interpretable. Furthermore, the evolutionary computing community has called for explainability in evolutionary computing, which could take the form of visualisations, to support EA comprehension much as explainable artificial intelligence has supported artificial intelligence. In this thesis, we report novel visualisation methods that can be used to visualise large and high-dimensional optimiser populations with the aim of creating greater interpretability during a search. We consider the nascent intersection of visualisation and explainability in evolutionary computing. The potentially high informativeness of a visualisation method from an early chapter of this work forms an effective platform for developing an explainability visualisation method, namely the population dynamics plot, which attempts to inject explainability into the inner workings of the search process. We further support the visualisation of populations by using machine learning to construct models which can capture the characteristics of an EA search, and we develop intelligent visualisations which use artificial intelligence to potentially enhance and support visualisation for a more informative search process. The methods developed in this thesis are evaluated both quantitatively and qualitatively. We use multi-feature benchmark problems to show the methods' ability to reveal specific problem characteristics such as disconnected fronts, local optima and bias, as well as potentially creating a better understanding of the problem landscape and the optimiser search for evaluating and comparing algorithm performance (we show the visualisation method to be more insightful than conventional metrics such as hypervolume alone). One of the most insightful methods developed in this thesis can produce a visualisation requiring less than 1% of the time and memory needed to visualise the same objective-space solutions with existing methods. This allows for greater scalability and for use in short compile-time applications such as online visualisations.
    Building on an existing visualisation method from this thesis, we then develop and apply an explainability method to a real-world problem and evaluate it, showing the method to be highly effective at explaining the search via solutions in the objective space, solution lineage and solution variation operators, so that the search of an optimiser can be compactly comprehended, evaluated and communicated; we note, however, that the explainability properties are only evaluated against the author's ability and could be evaluated further in future work with a usability study. The work is then supported by the development of intelligent visualisation models that may allow one to predict solutions in optima (importantly, local optima) in unseen problems by using a machine learning model. The results are effective, with some models able to predict and visualise solution optima with a balanced F1 score of 96%. The results of this thesis provide a suite of visualisations which aims to provide greater informativeness of the search and greater scalability than the previously existing literature. The work develops one of the first explainability methods aiming to create greater insight into the search space, solution lineage and reproductive operators. The work applies machine learning to potentially enhance EA understanding via visualisation; these models could also be used for a number of applications outside visualisation. Ultimately, the work provides novel methods for all EA stakeholders which aim to support the understanding, evaluation and communication of EA processes with visualisation.
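    A hedged sketch of the underlying plotting idea follows: it shows every individual of every generation in objective space rather than only the final front. The drifting random population merely stands in for a real optimiser, and the plot is far simpler than the population dynamics plots described above.

```python
# Plot the whole population per generation, coloured by generation index,
# instead of only the final non-dominated front. Data here are synthetic.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
generations, pop_size = 30, 50

fig, ax = plt.subplots()
for g in range(generations):
    # Toy population drifting towards the origin, standing in for an optimiser run.
    f1 = rng.normal(loc=1.0 - g / generations, scale=0.1, size=pop_size)
    f2 = rng.normal(loc=1.0 - g / generations, scale=0.1, size=pop_size)
    ax.scatter(f1, f2, s=8, color=plt.cm.viridis(g / generations), alpha=0.5)

ax.set_xlabel("objective 1")
ax.set_ylabel("objective 2")
ax.set_title("Population positions per generation (colour = generation)")
plt.show()
```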

    A Step Toward Improving Healthcare Information Integration & Decision Support: Ontology, Sustainability and Resilience

    The healthcare industry is a complex system with numerous stakeholders, including patients, providers, insurers, and government agencies. To improve healthcare quality and population well-being, there is a growing need to leverage data and information technology (IT) to support better decision-making. Healthcare information systems (HIS) are developed to store, process, and disseminate healthcare data. One of the main challenges with HIS is effectively managing large amounts of data to support decision-making. This requires integrating data from disparate sources, such as electronic health records, clinical trials, and research databases. Ontology is one approach to addressing this challenge. However, understanding ontology in the healthcare domain is complex and difficult. Another challenge is using HIS for scheduling and resource allocation in a sustainable and resilient way that meets multiple conflicting objectives. This is especially important in times of crisis, when demand for resources may be high and supply may be limited. This thesis aims to explore ontology theory and develop a methodology for constructing HIS that can effectively support better decision-making in scheduling and resource allocation while considering system resiliency and social sustainability. The objectives of the thesis are: (1) studying the theory of ontology in healthcare data and developing a deep model for constructing HIS; (2) advancing our understanding of healthcare system resiliency and social sustainability; (3) developing a methodology for scheduling with multiple objectives; and (4) developing a methodology for resource allocation with multiple objectives. The following conclusions can be drawn from the research results: (1) a data model with rich semantics and easy data integration can be created with a clearer definition of the scope and applicability of ontology; (2) a healthcare system's resilience and sustainability can be significantly increased by the proposed design principles; (3) through careful consideration of both efficiency and patients' experiences, together with a novel optimization algorithm, a scheduling problem can be made more patient-accessible; (4) a systematic approach to evaluating efficiency, sustainability, and resilience enables the simultaneous optimization of all three criteria at the system design stage, leading to more efficient distributions of resources and locations for healthcare facilities. The contributions of the thesis can be summarized as follows. Scientifically, this work has expanded our knowledge of ontology and data modelling, as well as our comprehension of healthcare system resilience and sustainability. Technologically and methodologically, the work has advanced the state of knowledge for system modelling and decision-making. Overall, this thesis examines the characteristics of healthcare systems from a system viewpoint. Three ideas in this thesis, namely the ontology-based data modelling approach, the multi-objective optimization models, and the algorithms for solving those models, can be adapted and used to improve different aspects of disparate systems.
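    As a rough, hypothetical illustration of ontology-based data modelling for healthcare integration, the sketch below declares a tiny schema and a few instance triples with rdflib. The namespace, classes, and properties are invented for illustration and do not reflect the data model developed in the thesis.

```python
# Tiny, hypothetical healthcare ontology sketch: schema triples plus instance
# data in one RDF graph, ready to be merged with data from other sources.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/healthcare#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Schema: two classes and a relation linking them.
g.add((EX.Patient, RDF.type, RDFS.Class))
g.add((EX.Appointment, RDF.type, RDFS.Class))
g.add((EX.hasAppointment, RDFS.domain, EX.Patient))
g.add((EX.hasAppointment, RDFS.range, EX.Appointment))

# Instance data from one toy source; another source's graph could be merged in.
g.add((EX.patient42, RDF.type, EX.Patient))
g.add((EX.appt7, RDF.type, EX.Appointment))
g.add((EX.patient42, EX.hasAppointment, EX.appt7))
g.add((EX.appt7, EX.scheduledFor, Literal("2024-03-01T09:00")))

print(g.serialize(format="turtle"))
```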

    Identifying and Detecting Attacks in Industrial Control Systems

    The integrity of the industrial control systems (ICS) found in utilities, oil and natural gas pipelines, manufacturing plants and transportation is critical to national wellbeing and security. Such systems depend on hundreds of field devices to manage and monitor a physical process. Previously, these devices were specific to ICS, but they are now being replaced by general-purpose computing technologies and, increasingly, augmented with Internet of Things (IoT) nodes. Whilst there are benefits to this approach in terms of cost and flexibility, it has attracted a wider community of adversaries. These include adversaries with significant domain knowledge, such as those responsible for attacks on Iran's nuclear facilities, a steel mill in Germany, and Ukraine's power grid; at the same time, non-specialist attackers are becoming increasingly interested in the physical damage it is possible to cause. The approach also increases the number and range of vulnerabilities to which ICS are subject; regrettably, conventional techniques for analysing such a large attack space are inadequate, a cause of major national concern. In this thesis we introduce a generalisable approach, based on evolutionary multiobjective algorithms, to assist in identifying vulnerabilities in complex, heterogeneous ICS. This is both challenging and an area that currently lacks research. Our approach has been to review the security of currently deployed ICS and then to make use of an internationally recognised ICS simulation testbed for experiments, assuming that the attacking community largely lacks specific ICS knowledge. Using the simulator, we identified vulnerabilities in individual components and then used them to generate attacks. Defences against these attacks, in the form of novel intrusion detection systems, were developed based on a range of machine learning models. Finally, these defences were further subjected to attacks created using the evolutionary multiobjective algorithms, demonstrating, for the first time, the feasibility of creating sophisticated attacks against a well-protected adversary using automated mechanisms.
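    One ingredient of such a defence can be sketched, under assumptions, as an anomaly detector fitted to normal process readings and asked to flag injected values. The synthetic sensor data below stand in for the testbed traffic, and a single isolation forest stands in for the range of machine learning models used in the thesis.

```python
# Anomaly-based detection sketch: fit on "normal" process readings, then flag
# readings that deviate, as an attack manipulating field devices might cause.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
# Synthetic normal operation, e.g. (tank level, flow rate) pairs.
normal = rng.normal(loc=[50.0, 1.2], scale=[2.0, 0.05], size=(500, 2))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

readings = np.array([
    [51.0, 1.19],   # plausible operating point
    [50.2, 1.23],   # plausible operating point
    [80.0, 0.10],   # values an attacker might inject
])
print(detector.predict(readings))   # 1 = looks normal, -1 = flagged as anomalous
```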

    Efficient Learning Machines

    Computer science