
    A Firewall Optimization for Threat-Resilient Micro-Segmentation in Power System Networks

    Electric power delivery relies on a communications backbone that must be secure. SCADA systems are essential to critical grid functions and use industrial control system (ICS) protocols such as the Distributed Network Protocol-3 (DNP3). These protocols are vulnerable to cyber threats that power systems, as cyber-physical critical infrastructure, must be protected against. For this reason, the NERC Critical Infrastructure Protection standard CIP-005-5 requires an electronic security perimeter, typically implemented with firewalls. This paper presents a meta-heuristic approach for optimal security zone formation that determines how these electronic security perimeters can be generated for large-scale power systems. To implement the resulting firewall rules in a large-scale power system model, this work also presents a prototype software tool that takes the optimization results and auto-configures the firewall nodes for different utilities in a cyber-physical testbed. Using this tool, firewall policies are configured for all the utilities and their substations within a synthetic 2000-bus model, assuming two different network topologies. Results generate the optimal electronic security perimeters to protect a power system's data flows and compare the number of firewalls, monetary cost, and risk alerts from path analysis.
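    As a rough illustration of the trade-off such an optimization encodes (fewer firewalled security zones versus more unfiltered intra-zone traffic), the sketch below greedily merges substations into zones. The cost terms, weights and flow data are illustrative assumptions only; they are not the paper's formulation, its meta-heuristic, or the synthetic 2000-bus data.

        # Hedged sketch: greedy security-zone formation over assumed DNP3 flow data.
        # Cost model (assumed): one firewall per zone, plus an exposure penalty for
        # traffic that stays inside a zone and therefore bypasses any firewall.
        from itertools import combinations

        def zone_cost(zones, flows, firewall_cost=2.0, exposure_weight=1.0):
            node_zone = {n: i for i, z in enumerate(zones) for n in z}
            intra = sum(w for (a, b), w in flows.items()
                        if node_zone[a] == node_zone[b])
            return firewall_cost * len(zones) + exposure_weight * intra

        def greedy_zone_formation(nodes, flows):
            zones = [{n} for n in nodes]              # start with one zone per substation
            while len(zones) > 1:
                current = zone_cost(zones, flows)
                merges = []
                for i, j in combinations(range(len(zones)), 2):
                    trial = [z for k, z in enumerate(zones) if k not in (i, j)]
                    trial.append(zones[i] | zones[j])
                    merges.append((zone_cost(trial, flows), trial))
                best_cost, best_zones = min(merges, key=lambda m: m[0])
                if best_cost >= current:              # no merge improves the cost
                    break
                zones = best_zones
            return zones

        # Toy example: three substations with assumed flow volumes between them.
        flows = {("A", "B"): 0.5, ("B", "C"): 8.0, ("A", "C"): 3.0}
        print(greedy_zone_formation(["A", "B", "C"], flows))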

    Modular lifelong machine learning

    Deep learning has drastically improved the state of the art in many important fields, including computer vision and natural language processing (LeCun et al., 2015). However, it is expensive to train a deep neural network on a machine learning problem, and the overall training cost increases further when one wants to solve additional problems. Lifelong machine learning (LML) develops algorithms that aim to efficiently learn to solve a sequence of problems which become available one at a time. New problems are solved with fewer resources by transferring previously learned knowledge. At the same time, an LML algorithm needs to retain good performance on all encountered problems, thus avoiding catastrophic forgetting. Current approaches do not possess all the desired properties of an LML algorithm. First, they primarily focus on preventing catastrophic forgetting (Diaz-Rodriguez et al., 2018; Delange et al., 2021) and, as a result, neglect some knowledge-transfer properties. Furthermore, they assume that all problems in a sequence share the same input space. Finally, scaling these methods to a large sequence of problems remains a challenge.

    Modular approaches to deep learning decompose a deep neural network into sub-networks, referred to as modules. Each module can then be trained to perform an atomic transformation, specialised in processing a distinct subset of inputs. This modular approach to storing knowledge makes it easy to reuse only the subset of modules which are useful for the task at hand. This thesis introduces a line of research which demonstrates the merits of a modular approach to lifelong machine learning and its ability to address the aforementioned shortcomings of other methods. Compared to previous work, we show that a modular approach can be used to achieve more LML properties than previously demonstrated. Furthermore, we develop tools which allow modular LML algorithms to scale in order to retain said properties on longer sequences of problems.

    First, we introduce HOUDINI, a neurosymbolic framework for modular LML. HOUDINI represents modular deep neural networks as functional programs and accumulates a library of pre-trained modules over a sequence of problems. Given a new problem, we use program synthesis to select a suitable neural architecture, as well as a high-performing combination of pre-trained and new modules. We show that our approach has most of the properties desired from an LML algorithm. Notably, it can perform forward transfer, avoid negative transfer and prevent catastrophic forgetting, even across problems with disparate input domains and problems which require different neural architectures.

    Second, we produce a modular LML algorithm which retains the properties of HOUDINI but can also scale to longer sequences of problems. To this end, we fix the choice of a neural architecture and introduce a probabilistic search framework, PICLE, for searching through different module combinations. To apply PICLE, we introduce two probabilistic models over neural modules which allow us to efficiently identify promising module combinations.

    Third, we phrase the search over module combinations in modular LML as black-box optimisation, which allows one to make use of methods from the setting of hyperparameter optimisation (HPO). We then develop a new HPO method which marries a multi-fidelity approach with model-based optimisation. We demonstrate that this leads to improved anytime performance in the HPO setting and discuss how this can in turn be used to augment modular LML methods.

    Overall, this thesis identifies a number of important LML properties, which have not all been attained in past methods, and presents an LML algorithm which can achieve all of them, apart from backward transfer.
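    The core module-reuse idea can be pictured with a small sketch: keep a library of frozen, pre-trained modules and, for a new task, pick the combination of a frozen module and a freshly trained head that performs best, so earlier tasks cannot be forgotten. This is a hedged illustration only; it is not the HOUDINI or PICLE implementation, and the module names, shapes and toy data below are assumptions.

        # Hedged sketch of module reuse in modular lifelong learning (not HOUDINI/PICLE).
        import torch
        import torch.nn as nn

        # Library of pre-trained feature extractors accumulated over earlier problems.
        library = {
            "digits_extractor": nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU()),
            "fashion_extractor": nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU()),
        }
        for module in library.values():
            for p in module.parameters():
                p.requires_grad = False            # freezing prevents catastrophic forgetting

        def candidate_models(num_classes=10):
            """Each frozen extractor combined with a fresh, trainable classification head."""
            for name, backbone in library.items():
                head = nn.Linear(64, num_classes)  # only this part would be trained
                yield name, nn.Sequential(backbone, head)

        def accuracy(model, x, y):
            with torch.no_grad():
                return (model(x).argmax(dim=1) == y).float().mean().item()

        # Toy stand-in for a new problem's validation split.
        x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
        name, model = max(candidate_models(), key=lambda nm: accuracy(nm[1], x, y))
        print("selected pre-trained module:", name)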

    Evaluation of different segmentation-based approaches for skin disorders from dermoscopic images

    Bachelor's Final Degree Project in Biomedical Engineering. Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona. Academic year: 2022-2023. Tutor/Director: Sala Llonch, Roser; Mata Miquel, Christian; Munuera, Josep.

    Skin cancer is the most common type of cancer in the world, and its incidence has been increasing over the past decades. Even with the most complex and advanced technologies, current image acquisition systems do not permit a reliable identification of the skin lesion by visual examination due to the challenging structure of the malignancy. This promotes the need for automatic skin lesion segmentation methods to assist physicians' diagnosis when determining the lesion's region and to serve as a preliminary step for the classification of the skin lesion. Accurate and precise segmentation is crucial for rigorous screening and monitoring of the disease's progression. To that end, the present project reviews the state of the art of the most predominant conventional segmentation models for skin lesion segmentation, alongside a market analysis. With the rise of automatic segmentation tools, a wide number of algorithms are currently in use, but many drawbacks arise when employing them for dermatological disorders due to the high-level presence of artefacts in the acquired images. In light of the above, three segmentation techniques have been selected for this work: the level set method, an algorithm combining the GrabCut and k-means methods, and an intensity-based automatic algorithm developed by the Hospital Sant Joan de Déu de Barcelona research group. In addition, their performance is validated with a view to their further implementation in clinical training. The proposed methods and the obtained results are based on a publicly available skin lesion image database.
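    As a rough sketch of one of the reviewed approaches, the combination of k-means and GrabCut can be prototyped in a few lines with OpenCV: k-means provides a coarse lesion estimate that then seeds a GrabCut refinement. The cluster choice, iteration counts and file path are assumptions for illustration, not the thesis' exact pipeline or parameters.

        # Hedged sketch: GrabCut seeded by a k-means colour clustering (OpenCV).
        import cv2
        import numpy as np

        def segment_lesion(image_bgr, k=2, grabcut_iters=5):
            h, w = image_bgr.shape[:2]

            # 1) k-means clustering of pixel colours gives a rough lesion estimate.
            pixels = image_bgr.reshape(-1, 3).astype(np.float32)
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
            _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5,
                                            cv2.KMEANS_PP_CENTERS)
            dark_cluster = int(np.argmin(centers.sum(axis=1)))  # lesions are usually darker
            rough = labels.reshape(h, w) == dark_cluster

            # 2) Seed a GrabCut mask with the rough estimate and refine it.
            mask = np.full((h, w), cv2.GC_PR_BGD, dtype=np.uint8)
            mask[rough] = cv2.GC_PR_FGD
            bgd = np.zeros((1, 65), np.float64)
            fgd = np.zeros((1, 65), np.float64)
            cv2.grabCut(image_bgr, mask, None, bgd, fgd, grabcut_iters,
                        cv2.GC_INIT_WITH_MASK)
            lesion = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
            return (lesion * 255).astype(np.uint8)

        # Example usage on a dermoscopic image from a public dataset (path assumed):
        # lesion_mask = segment_lesion(cv2.imread("ISIC_0000000.jpg"))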

    Automated Mapping of Adaptive App GUIs from Phones to TVs

    With the increasing interconnection of smart devices, users often wish to use the same app on quite different devices for identical tasks, such as watching the same movie on both their smartphone and TV. However, the significant differences in screen size, aspect ratio, and interaction styles make it challenging to adapt Graphical User Interfaces (GUIs) across these devices. Although there are millions of apps available on Google Play, only a few thousand are designed to support smart TV displays. Existing techniques to map a mobile app GUI to a TV either adopt a responsive design, which struggles to bridge the substantial gap between phone and TV, or use mirror apps for improved video display, which require hardware support and extra engineering effort. Instead of developing another app to support TVs, we propose a semi-automated approach that generates corresponding adaptive TV GUIs, given the phone GUIs as input. Based on our empirical study of GUI pairs for TV and phone in existing apps, we synthesize a list of rules for grouping and classifying phone GUIs, converting them to TV GUIs, and generating dynamic TV layouts and source code for the TV display. Our tool is beneficial not only to developers but also to GUI designers, who can further customize the generated GUIs for their TV app development. An evaluation and user study demonstrate the accuracy of our generated GUIs and the usefulness of our tool.
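    The flavour of such grouping-and-conversion rules can be sketched in a few lines. The widget model, the two rules shown (row grouping by vertical gaps, and pinning media rows to the left half of a 1920x1080 screen) and all numbers are illustrative assumptions, not the rule list synthesized in the paper.

        # Hedged sketch of rule-based phone-to-TV GUI mapping (illustrative rules only).
        from dataclasses import dataclass

        @dataclass
        class Widget:
            kind: str   # e.g. "image", "text", "button"
            x: int
            y: int
            w: int
            h: int

        def group_rows(widgets, gap=16):
            """Rule 1 (assumed): widgets separated by less than `gap` vertically form a row."""
            rows, current = [], []
            for wdg in sorted(widgets, key=lambda v: v.y):
                if current and wdg.y > current[-1].y + current[-1].h + gap:
                    rows.append(current)
                    current = []
                current.append(wdg)
            if current:
                rows.append(current)
            return rows

        def map_to_tv(widgets, tv_w=1920, phone_w=1080):
            """Rule 2 (assumed): scale to half the TV width; media rows go to the left half."""
            scale = (tv_w // 2) / phone_w
            layout = []
            for row in group_rows(widgets):
                x_offset = 0 if any(w.kind == "image" for w in row) else tv_w // 2
                for w in row:
                    layout.append(Widget(w.kind, x_offset + int(w.x * scale),
                                         int(w.y * scale), int(w.w * scale), int(w.h * scale)))
            return layout

        phone_gui = [Widget("image", 0, 0, 1080, 600), Widget("button", 40, 700, 1000, 120)]
        for w in map_to_tv(phone_gui):
            print(w)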

    Evaluation of viral therapy for cancer treatment

    Cancer incidence has been increasing and has a profound impact on society. Current treatments are known to have limits, so the search for more effective cancer treatments has led scientists to theorize about using viruses. Viruses are submicroscopic and consist of nucleic acid, either RNA or DNA, which can be single-stranded or double-stranded. Oncolytic viruses use a virus's natural ability to infect certain cells in order to target and destroy cancer cells through cell lysis, apoptosis, and modifications of the cell surface membrane. They can carry transgenes such as GM-CSF, which promotes host immune responses. Oncolytic activities such as local replication and propagation also induce cytokines that engage the immune system to increase antitumor immunity. T-Vec is an oncolytic viral drug recently approved for use, and many other drugs are in clinical trials awaiting approval. The common goal of these drugs is to prolong the survival of cancer patients and induce patient-specific anti-tumor immunity. These drugs have certain advantages over traditional therapies but also potential risks; a proposed study of G47Δ can be used to explore the safety and efficacy of oncolytic viruses compared to traditional therapies such as chemotherapy. In this proposed study, subjects over the age of 18 are followed longitudinally over 5 years. Two hundred participants with advanced-stage melanoma, breast cancer, and/or prostate cancer, in whom radiation and chemotherapy have not slowed the progression of disease, are enrolled and split into two groups receiving either Paclitaxel and Docetaxel or 3 x 10^8 pfu of G47Δ injected intravenously. Adverse-event severity is monitored, with means and standard deviations computed across the data. Overall, viral therapy has been shown to have a tolerable safety profile, with only low-grade adverse events and largely non-overlapping toxicity with other cancer therapeutics. This is especially notable given the number of late-stage patients enrolled in these studies, the severity of their diseases, and the side effects associated with chemotherapy and radiation. While usage of oncolytic viral therapies is currently still low, it is hoped that it will not be long before this becomes a standard treatment option for all cancer patients.

    A Reinforcement Learning-assisted Genetic Programming Algorithm for Team Formation Problem Considering Person-Job Matching

    An efficient team is essential for a company to successfully complete new projects. To solve the team formation problem considering person-job matching (TFP-PJM), a 0-1 integer programming model is constructed which considers the effects of both person-job matching and team members' willingness to communicate on team efficiency, with the person-job matching score calculated using intuitionistic fuzzy numbers. Then, a reinforcement learning-assisted genetic programming algorithm (RL-GP) is proposed to enhance the quality of solutions. RL-GP adopts an ensemble population strategy: before the population evolves at each generation, the agent selects one of four population search modes according to the information obtained, thus achieving a sound balance between exploration and exploitation. In addition, surrogate models are used to evaluate the formation plans generated by individuals, which speeds up the algorithm's learning process. Afterward, a series of comparison experiments are conducted to verify the overall performance of RL-GP and the effectiveness of the improved strategies within the algorithm. The hyper-heuristic rules obtained through efficient learning can be utilized as decision-making aids when forming project teams. This study reveals the advantages of reinforcement learning methods, ensemble strategies, and surrogate models applied to the GP framework. The diversity and intelligent selection of search patterns, along with fast adaptation evaluation, are distinct features that enable RL-GP to be deployed in real-world enterprise environments.
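    The per-generation mode selection can be pictured with a simple value-based agent: it keeps a running value estimate for each of four search modes and chooses between them epsilon-greedily, rewarded by the fitness improvement a generation produces. The mode names, reward definition and hyperparameters below are illustrative assumptions; the paper's RL formulation and GP operators are not reproduced.

        # Hedged sketch: epsilon-greedy selection among four population search modes.
        import random

        MODES = ["exploit_elite", "explore_random", "crossover_heavy", "mutation_heavy"]

        class ModeSelector:
            def __init__(self, epsilon=0.2, lr=0.1):
                self.q = {m: 0.0 for m in MODES}   # running value estimate per mode
                self.epsilon, self.lr = epsilon, lr

            def select(self):
                if random.random() < self.epsilon:
                    return random.choice(MODES)                 # explore
                return max(self.q, key=self.q.get)              # exploit

            def update(self, mode, reward):
                self.q[mode] += self.lr * (reward - self.q[mode])

        # Toy evolutionary loop: the reward is the best-fitness improvement obtained
        # in a generation (here a random stand-in for running the GP population).
        random.seed(0)
        selector, best_fitness = ModeSelector(), 0.0
        for generation in range(50):
            mode = selector.select()
            gain = random.uniform(-0.1, 1.0 if mode == "exploit_elite" else 0.5)
            selector.update(mode, reward=max(0.0, gain))
            best_fitness += max(0.0, gain)
        print(selector.q)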

    When to be critical? Performance and evolvability in different regimes of neural Ising agents

    It has long been hypothesized that operating close to the critical state is beneficial for natural and artificial systems and for their evolution. We put this hypothesis to the test in a system of evolving foraging agents controlled by neural networks whose dynamical regime can adapt throughout evolution. Surprisingly, we find that all populations that discover solutions evolve to be subcritical. Through a resilience analysis, we find that there are still benefits to starting the evolution in the critical regime. Namely, initially critical agents maintain their fitness level under environmental changes (for example, in the lifespan) and degrade gracefully when their genome is perturbed. At the same time, initially subcritical agents, even when evolved to the same fitness, are often unable to withstand changes in the lifespan and degrade catastrophically under genetic perturbations. Furthermore, we find that the optimal distance to criticality depends on task complexity. To test this, we introduce a hard and a simple task: for the hard task, agents evolve closer to criticality, whereas more subcritical solutions are found for the simple task. We verify that our results are independent of the selected evolutionary mechanism by testing them on two principally different approaches: a genetic algorithm and an evolution strategy. In summary, our study suggests that although optimal behaviour in the simple task is obtained in a subcritical regime, initializing near criticality is important for efficiently finding optimal solutions for new tasks of unknown complexity.
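    One of the two evolutionary mechanisms mentioned, an evolution strategy, is easy to sketch: a (1+lambda) loop that mutates a real-valued genome (e.g. controller couplings) and keeps the fittest candidate. The quadratic stand-in fitness and all hyperparameters are assumptions; the foraging task and the Ising-network controllers from the study are not reproduced here.

        # Hedged sketch of a (1+lambda) evolution strategy over a real-valued genome.
        import numpy as np

        def evolve(fitness, genome_size, generations=200, offspring=10, sigma=0.1, seed=0):
            rng = np.random.default_rng(seed)
            parent = rng.normal(0, 1, genome_size)          # e.g. controller couplings
            for _ in range(generations):
                children = parent + rng.normal(0, sigma, (offspring, genome_size))
                candidates = list(children) + [parent]      # elitist (1+lambda) selection
                parent = max(candidates, key=fitness)
            return parent

        # Stand-in fitness; in the study this would be the agent's foraging performance.
        toy_fitness = lambda g: -float(np.sum(g ** 2))
        best = evolve(toy_fitness, genome_size=20)
        print("best fitness:", toy_fitness(best))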

    Optimizing Weights And Biases in MLP Using Whale Optimization Algorithm

    Artificial neural networks are intelligent, non-parametric mathematical models inspired by the human nervous system. They have been widely studied and applied to classification, pattern recognition and forecasting problems. The main challenges in training an artificial neural network are its learning process, its nonlinear nature and the unknown best set of controlling parameters (weights and biases). When artificial neural networks are trained with conventional training algorithms, they suffer from local optima stagnation and slow convergence; this makes stochastic optimization algorithms a definitive alternative for alleviating these drawbacks. This thesis proposes an algorithm based on the recently proposed Whale Optimization Algorithm (WOA). WOA has been shown to solve a wide range of optimization problems and to outperform existing algorithms. The successful implementation of this algorithm motivated our attempt to benchmark its performance in training feed-forward neural networks. We took a set of 20 datasets with different difficulty levels and tested the proposed WOA-MLP-based trainer. Further, the results are verified by comparing WOA-MLP with the backpropagation algorithm and six evolutionary techniques. The results show that the proposed trainer can outperform the current algorithms on the majority of datasets in terms of local optima avoidance and convergence speed.
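    The core idea, encoding all MLP weights and biases as one real-valued vector and letting WOA search that space against a loss function, can be sketched compactly. The tiny network, the toy XOR data and every hyperparameter below are illustrative assumptions, not the thesis' 20-dataset benchmark setup.

        # Hedged sketch: Whale Optimization Algorithm training a tiny MLP on XOR.
        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
        y = np.array([0, 1, 1, 0], float)                     # XOR targets
        H = 4                                                 # hidden units
        DIM = 2 * H + H + H + 1                               # W1, b1, W2, b2 flattened

        def mse(vec):
            W1 = vec[:2 * H].reshape(2, H)
            b1 = vec[2 * H:3 * H]
            W2 = vec[3 * H:4 * H]
            b2 = vec[4 * H]
            hidden = np.tanh(X @ W1 + b1)
            out = 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))
            return float(np.mean((out - y) ** 2))

        def woa_train(n_whales=30, iters=300, b=1.0):
            whales = rng.uniform(-1, 1, (n_whales, DIM))
            best = min(whales, key=mse).copy()
            for t in range(iters):
                a = 2.0 - 2.0 * t / iters                     # a decreases linearly 2 -> 0
                for i in range(n_whales):
                    r1, r2, p = rng.random(3)
                    A, C = 2 * a * r1 - a, 2 * r2
                    if p < 0.5:
                        if abs(A) < 1:                        # encircle the best solution
                            whales[i] = best - A * np.abs(C * best - whales[i])
                        else:                                 # explore around a random whale
                            rand = whales[rng.integers(n_whales)]
                            whales[i] = rand - A * np.abs(C * rand - whales[i])
                    else:                                     # spiral bubble-net update
                        l = rng.uniform(-1, 1)
                        whales[i] = (np.abs(best - whales[i]) * np.exp(b * l)
                                     * np.cos(2 * np.pi * l) + best)
                    if mse(whales[i]) < mse(best):
                        best = whales[i].copy()
            return best

        trained = woa_train()
        print("final MSE on XOR:", mse(trained))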

    Towards A Graphene Chip System For Blood Clotting Disease Diagnostics

    Point-of-care diagnostics (POCD) allows the rapid, accurate measurement of analytes near a patient. This enables faster clinical decision making and can lead to earlier diagnosis and better patient monitoring and treatment. However, despite many prospective POCD devices being developed for a wide range of diseases, this promised technology is yet to be translated to a clinical setting due to the lack of a cost-effective biosensing platform.

    This thesis focuses on the development of a highly sensitive, low-cost and scalable biosensor platform that combines graphene with semiconductor fabrication techniques to create graphene field-effect transistor biosensors. The key challenges of designing and fabricating a graphene-based biosensor are addressed. This work focuses on a specific platform for blood clotting disease diagnostics, but the platform can be applied to any disease with a detectable biomarker.

    Multiple sensor designs were tested during this work that maximised sensor efficiency and costs for different applications. The multiplex design enabled different graphene channels on the same chip to be functionalised with unique chemistry. The Inverted MOSFET design was created, which allows back-gated measurements to be performed whilst keeping the graphene channel open for functionalisation. The Shared Source and Matrix design maximises the total number of sensing channels per chip, resulting in the most cost-effective fabrication approach for a graphene-based sensor (decreasing the cost per channel from £9.72 to £4.11).

    The challenge of integrating graphene into a semiconductor fabrication process is also addressed through the development of a novel vacuum transfer methodology that allows photoresist-free transfer. The two main fabrication processes, graphene supplied on the wafer ("Pre-Transfer") and graphene transferred after metallisation ("Post-Transfer"), were compared in terms of graphene channel resistance and graphene end quality (defect density and photoresist). The Post-Transfer process produced higher quality graphene (less damage, residue and doping), as confirmed by Raman spectroscopy.

    Following sensor fabrication, the next stages of creating a sensor platform involve the passivation and packaging of the sensor chip. Different dielectric deposition approaches are compared for passivation. Molecular Vapour Deposition (MVD) of Al2O3 was shown to produce graphene channels with lower damage than unprocessed graphene, and it also improves graphene doping, bringing the Dirac point of the graphene close to 0 V. The packaging integration of microfluidics is investigated by comparing traditional soft lithography approaches and a newer 3D-printed microfluidic approach. Specific microfluidic packaging for blood separation, towards a blood-sampling point-of-care sensor, is examined to identify the laminar approach for lowering the blood cell count as a method of pre-processing the blood sample before sensing.

    To test the sensitivity of the Post-Transfer, MVD-passivated graphene sensor developed in this work, real-time IV measurements were performed to identify thrombin protein binding in real time on the graphene surface. The sensor was functionalised using a thrombin-specific aptamer solution, and real-time IV measurements were performed on the functionalised graphene sensor with a range of biologically relevant protein concentrations. The resulting sensitivity of the graphene sensor was in the 1-100 pg/ml concentration range, producing a resistance change of 0.2% per pg/ml. Specificity was confirmed using a non-thrombin-specific aptamer as the negative control. These results indicate that the graphene sensor platform developed in this thesis has potential as a highly sensitive POCD. The processes developed here can be used to develop graphene sensors for multiple biomarkers in the future.
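    Assuming the reported response is roughly linear over the 1-100 pg/ml range, the 0.2% resistance change per pg/ml figure translates directly into a simple read-out calculation; the sketch below is a back-of-the-envelope illustration only, since the calibration model is not specified here.

        # Hedged sketch: concentration read-out assuming a linear 0.2 %/(pg/ml) response.
        def thrombin_concentration_pg_ml(r_baseline_ohm, r_measured_ohm,
                                         sensitivity_pct_per_pg_ml=0.2):
            delta_pct = 100.0 * (r_measured_ohm - r_baseline_ohm) / r_baseline_ohm
            return abs(delta_pct) / sensitivity_pct_per_pg_ml

        # A 2 kOhm channel shifting by 40 Ohm (a 2% change) would read as ~10 pg/ml.
        print(thrombin_concentration_pg_ml(2000.0, 2040.0))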

    Simulation and Optimization of Scheduling Policies in Dynamic Stochastic Resource-Constrained Multi-Project Environments

    The goal of project management is to organise project schedules so that projects are completed before the completion dates specified in their contracts. When a project runs beyond its completion date, organisations may lose the rewards from project completion as well as organisational prestige. Project management involves many uncertain factors, such as unknown arrival dates of new projects and unreliable task duration predictions, which may affect project schedules and lead to delivery overruns. Successful project management must take these uncertainties into account. In this PhD study, we aim to create a more comprehensive model that considers a system where projects of multiple types arrive at random in a resource-constrained environment, rewards for project delivery are reduced by fees for late project completion, and tasks may complete sooner or later than their expected durations. In this thesis, we consider two extensions of the resource-constrained multi-project scheduling problem (RCMPSP) in dynamic environments. The RCMPSP requires scheduling the tasks of multiple projects simultaneously using a pool of limited renewable resources, and its goal is usually the shortest makespan or the highest profit. The first extension is the dynamic resource-constrained multi-project scheduling problem, where dynamic means that new projects arrive randomly during ongoing project execution, disturbing the existing scheduling plan. The second extension is the dynamic and stochastic resource-constrained multi-project scheduling problem, where, in addition to random new project arrivals, task durations are stochastic. In these problems, we assume that projects generate rewards at completion, completions later than a due date incur tardiness costs, and we seek to maximise the average profit per unit time or the expected discounted long-run profit. We model these problems as infinite-horizon discrete-time Markov decision processes.
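    The kind of discrete-time simulation used to evaluate a scheduling policy's average profit per unit time in such a dynamic, stochastic setting can be sketched as follows. Projects are simplified to single tasks, and the arrival rate, reward, tardiness fee, resource pool size and the earliest-due-date policy are illustrative assumptions rather than the thesis' model or instances.

        # Hedged sketch: simulate random project arrivals, stochastic durations and an
        # earliest-due-date policy, and report the average profit per unit time.
        import heapq
        import random

        def simulate(horizon=10_000, n_resources=3, arrival_p=0.2,
                     reward=100.0, tardiness_fee=2.0, seed=0):
            random.seed(seed)
            waiting, running, profit = [], [], 0.0    # waiting holds (due_date, arrival, work)
            for t in range(horizon):
                if random.random() < arrival_p:       # dynamic: random new project arrivals
                    work = random.randint(5, 15)      # stochastic task duration
                    heapq.heappush(waiting, (t + 2 * work, t, work))
                still_running = []                    # collect rewards minus tardiness fees
                for finish, due in running:
                    if finish <= t:
                        profit += reward - tardiness_fee * max(0, finish - due)
                    else:
                        still_running.append((finish, due))
                running = still_running
                # policy: start the earliest-due-date project whenever a resource is free
                while waiting and len(running) < n_resources:
                    due, _, work = heapq.heappop(waiting)
                    running.append((t + work, due))
            return profit / horizon

        print("average profit per unit time:", simulate())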