
    Reinforcement learning for efficient network penetration testing

    Penetration testing (also known as pentesting or PT) is a common practice for actively assessing the defenses of a computer network by planning and executing all possible attacks to discover and exploit existing vulnerabilities. Current penetration testing methods are increasingly becoming non-standard, composite and resource-consuming, despite the use of evolving tools. In this paper, we propose and evaluate an AI-based pentesting system that uses machine learning techniques, namely reinforcement learning (RL), to learn and reproduce routine and complex pentesting activities. The proposed system, named the Intelligent Automated Penetration Testing System (IAPTS), consists of a module that integrates with industrial PT frameworks to enable them to capture information, learn from experience, and reproduce tests in future similar testing cases. IAPTS aims to save human resources while producing much-enhanced results in terms of time consumption, reliability and frequency of testing. IAPTS models PT environments and tasks as a partially observable Markov decision process (POMDP), which is solved by a POMDP solver. Although the scope of this paper is limited to the planning of network infrastructure PT and not the entire practice, the obtained results support the hypothesis that RL can enhance PT beyond the capabilities of any human PT expert in terms of time consumed, covered attack vectors, and accuracy and reliability of the outputs. In addition, this work tackles the complex problem of expertise capture and reuse by allowing the IAPTS learning module to store and reuse PT policies in the same way that a human PT expert would learn, but more efficiently.
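
    A minimal sketch of the underlying idea, assuming a toy fully observable attack chain and plain tabular Q-learning (the paper itself models pentesting as a POMDP and uses a dedicated POMDP solver; the stages, actions, probabilities and rewards below are illustrative assumptions, not the IAPTS model):

import random
from collections import defaultdict

# Illustrative attack stages and actions; the agent must learn which action
# advances the attack at each stage.
STAGES = ["start", "ports_known", "service_known", "shell", "root_access"]
ACTIONS = ["scan_ports", "fingerprint_service", "exploit_service", "escalate_privileges"]
TERMINAL = "root_access"

def step(state, action):
    # The "correct" action for a stage advances the attack with 80% probability;
    # anything else (or a failed attempt) just costs time.
    idx = STAGES.index(state)
    if action == ACTIONS[idx] and random.random() < 0.8:
        nxt = STAGES[idx + 1]
        return nxt, 100.0 if nxt == TERMINAL else -1.0
    return state, -1.0

Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.95, 0.2

for _ in range(2000):
    s = "start"
    while s != TERMINAL:
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy can be stored and replayed on similar targets,
# mirroring the expertise-capture-and-reuse idea described in the abstract.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STAGES[:-1]})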

    Economic regulation for multi tenant infrastructures

    Large-scale computing infrastructures need scalable and efficient resource allocation mechanisms to fulfil the requirements of their participants and applications while the whole system is regulated to work efficiently. Computational markets provide efficient allocation mechanisms that aggregate information from multiple sources in large, dynamic and complex systems where no single source has complete information. They have proven successful in matching resource demand and resource supply in the presence of selfish, multi-objective, utility-optimizing users and selfish, profit-optimizing providers. However, global infrastructure metrics that may not directly affect participants of the computational market still need to be addressed; these economic externalities include load balancing and energy efficiency. In this thesis, we point out the need to address these economic externalities, and we design and evaluate appropriate regulation mechanisms, from different perspectives and on top of existing economic models, to incorporate a wider range of objective metrics not considered otherwise. Our main contributions are threefold: first, we propose a taxation mechanism that addresses the resource congestion problem, effectively improving the balance of load among resources when correlated economic preferences are present; second, we propose a game-theoretic model with complete information to derive an algorithm that helps resource providers scale resource supply up and down so that energy-related costs can be reduced; and third, we relax our previous assumptions about complete information on the resource provider side and design an incentive-compatible mechanism that encourages users to truthfully report their resource requirements, effectively assisting providers in making energy-efficient allocations while offering users a dynamic allocation mechanism.
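
    A minimal sketch of the taxation idea for the load-balancing externality, assuming illustrative prices, utilisations and a quadratic tax function (not the thesis's actual mechanism):

# Toy congestion tax: the effective price of a resource grows with its
# utilisation, steering self-interested users toward less loaded resources.
def taxed_price(base_price, utilisation, tax_rate=0.5):
    # utilisation in [0, 1]; the tax grows super-linearly as the resource congests
    return base_price * (1.0 + tax_rate * utilisation ** 2)

resources = {"A": {"price": 1.0, "util": 0.9},   # heavily loaded
             "B": {"price": 1.0, "util": 0.3}}   # lightly loaded

# A utility-maximising user simply picks the cheapest taxed offer, which
# here is the less congested resource B, improving the balance of load.
choice = min(resources, key=lambda r: taxed_price(resources[r]["price"],
                                                  resources[r]["util"]))
print(choice, {r: round(taxed_price(v["price"], v["util"]), 3)
               for r, v in resources.items()})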

    Proceedings of Abstracts Engineering and Computer Science Research Conference 2019

    © 2019 The Author(s). This is an open-access work distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. For further details please see https://creativecommons.org/licenses/by/4.0/. Note: the keynote "Fluorescence visualisation to evaluate effectiveness of personal protective equipment for infection control" is © 2019 Crown copyright and so is licensed under the Open Government Licence v3.0. Under this licence users are permitted to copy, publish, distribute and transmit the Information; adapt the Information; and exploit the Information commercially and non-commercially, for example by combining it with other Information, or by including it in your own product or application. Where you do any of the above you must acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/. This book is the record of abstracts submitted and accepted for presentation at the Inaugural Engineering and Computer Science Research Conference, held on 17th April 2019 at the University of Hertfordshire, Hatfield, UK. The conference is a local event that aims to bring together research students, staff and eminent external guests to celebrate Engineering and Computer Science research at the University of Hertfordshire. The ECS Research Conference aims to showcase the broad landscape of research taking place in the School of Engineering and Computer Science. The 2019 conference was articulated around three topical cross-disciplinary themes: Make and Preserve the Future; Connect the People and Cities; and Protect and Care.

    Automatic Resource Allocation for High Availability Cloud Services

    This paper proposes an approach to support cloud brokers in finding optimal configurations for the deployment of dependability- and security-sensitive cloud applications. The approach is based on model-driven principles and uses both UML and Bayesian Networks to capture, analyse and optimise cloud deployment configurations. While the paper focuses mainly on the initial allocation phase, the approach is extensible to the operational phases of the life-cycle. In this way, a continuous improvement of cloud applications may be realised by monitoring, enforcing and re-negotiating cloud resources following detected anomalies and failures.
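
    A minimal sketch of the kind of trade-off such a broker could score, using a simplified independent-failure availability formula rather than the paper's UML-plus-Bayesian-Network models (the component names and probabilities are assumptions):

# Toy dependability model: a cloud application behind a load balancer with N
# replicated VMs; the service is up if the balancer is up and at least one
# replica is up (failures assumed independent for simplicity).
def availability(p_balancer, p_vm, replicas):
    p_all_replicas_down = (1.0 - p_vm) ** replicas
    return p_balancer * (1.0 - p_all_replicas_down)

# A broker can compare candidate configurations before the initial allocation:
for replicas in (1, 2, 3):
    print(replicas, "replica(s):", round(availability(0.999, 0.99, replicas), 6))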

    Agency in the Internet of Things

    This report summarises and extends the work done for the task force on IoT, which ended in 2012. In response to a DG CNECT request, the JRC studied this emergent technology following methodologies from the Science and Technology Studies field. The aim of this document is therefore to present and explore, on the basis of present-day conceptions of relevant values, rights and norms, some of the "ethical issues" arising from the research, development and deployment of IoT, focusing on agency, autonomy and social justice. We start by exploring the types of imaginaries that seem to be entrenched in and inspiring the development of IoT, and how they are portrayed in "normal" communication from corporations and promoters to the ordinary citizen (chapter 2). We report the empirical work we have conducted, namely the JRC contribution to the limited public debate initiated by the European Commission via the Your Voice portal during the spring of 2012 (chapter 3) and an empirical exercise involving participants of two IoT conferences (chapter 4). This latter exercise sought to illustrate how our notions of goodness, trust, relationships, agency and autonomy are negotiated through the appropriation of unnoticed ordinary objects; this contributes to the discussion about the ethical issues at stake with the emerging IoT vision beyond the right to privacy, data protection and security. Furthermore, based on a literature review, the report reflects on two of the main ethical issues that arise with the IoT vision: agency (and autonomy) and social justice (chapter 5), eventually examining governance alternatives for the ethical issues raised (chapter 6). JRC.G.7 - Digital Citizen Security

    Exploring traffic and QoS management mechanisms to support mobile cloud computing using service localisation in heterogeneous environments

    In recent years, mobile devices have evolved to support an amalgam of multimedia applications and content. However, the small size of these devices places a limit on the amount of local computing resources. The emergence of Cloud technology has set the ground for an era of task offloading for mobile devices, and we are now seeing the deployment of applications that make more extensive use of Cloud processing as a means of augmenting the capabilities of mobiles. Mobile Cloud Computing is the term used to describe the convergence of these technologies towards applications and mechanisms that offload tasks from mobile devices to the Cloud. In order for mobile devices to access Cloud resources and successfully offload tasks there, a solution for constant and reliable connectivity is required. The proliferation of wireless technology ensures that networks are available almost everywhere in an urban environment and mobile devices can stay connected to a network at all times. However, user mobility is often the cause of intermittent connectivity that affects the performance of applications and ultimately degrades the user experience. 5th Generation Networks are introducing mechanisms that enable constant and reliable connectivity through seamless handovers between networks and provide the foundation for a tighter coupling between Cloud resources and mobiles. This convergence of technologies creates new challenges in the areas of traffic management and QoS provisioning. The constant connectivity of mobile devices to, and their reliance on, Cloud resources can create large traffic flows between networks. Furthermore, depending on the type of application generating the traffic flow, very strict QoS may be required from the networks, as suboptimal performance may severely degrade an application's functionality. In this thesis, I propose a new service delivery framework, centred on the convergence of Mobile Cloud Computing and 5G networks, for the purpose of optimising service delivery in a mobile environment. The framework is used as a guideline for identifying different aspects of service delivery in a mobile environment and for providing a path for future research in this field. The focus of the thesis is placed on the service delivery mechanisms that are responsible for optimising QoS and managing network traffic. I present a solution for managing traffic through dynamic service localisation according to user mobility and device connectivity. I implement a prototype of the solution in a virtualised environment as a proof of concept and demonstrate its functionality and the results gathered from experimentation. Finally, I present a new approach to modelling network performance that takes user mobility into account. The model considers the overall performance of a persistent connection as the mobile node switches between different networks. Results from the model can be used to determine which networks will negatively affect application performance and what impact they will have for the duration of the user's movement. The proposed model is evaluated using an analytical approach.
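
    A minimal sketch of the mobility-aware performance idea: the overall throughput of a persistent connection is modelled as a time-weighted average over the networks the user visits, with a fixed per-handover penalty (the networks, rates and penalty are illustrative assumptions, not the thesis's analytical model):

# Toy analytical model: the node visits networks in sequence; overall
# throughput is the time-weighted average, discounting a handover gap.
def expected_throughput(segments, handover_gap_s=2.0):
    """segments: list of (throughput_mbps, duration_s), one per visited network."""
    total_time = sum(d for _, d in segments)
    useful = sum(rate * max(d - handover_gap_s, 0.0) for rate, d in segments)
    return useful / total_time

route = [(50.0, 120.0),   # Wi-Fi hotspot
         (10.0, 60.0),    # congested cellular cell
         (30.0, 180.0)]   # second Wi-Fi network
print(round(expected_throughput(route), 2), "Mbps average over the route")

# Segments whose rate falls below an application's QoS threshold can be
# flagged as networks that will degrade performance for that part of the route.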

    Cognitive Robotics in Industrial Environments


    Optimisation of John the Ripper in a clustered Linux environment

    To aid system administrators in enforcing strict password policies, password cracking tools such as Cisilia (C.I.S.I.ar, 2003) and John the Ripper (Solar Designer, 2002) have been employed as software utilities to look for weak passwords. John the Ripper (JtR) attempts to crack passwords using a dictionary, brute-force or other mode of attack. The computational intensity of cracking passwords has led to the utilisation of parallel-processing environments to increase the speed of the password-cracking task. Parallel-processing environments can consist of either single systems with multiple processors, or a collection of separate computers working together as a single, logical computer system; both of these configurations allow operations to run concurrently. This study aims to optimise and compare the execution of JtR on a pair of Beowulf clusters, which are collections of computers configured to run in a parallel manner. Each of the clusters will run the Rocks cluster distribution, which is a Red Hat Linux-based cluster toolkit. An implementation of the Message Passing Interface (MPI), MPICH, will be used for inter-node communication, allowing the password cracker to run in parallel. Experiments were performed to test the reliability of cracking a single set of password samples on both a 32-bit and a 64-bit Beowulf cluster, comprised of Intel Pentium and AMD64 Opteron processors respectively. These experiments were also used to test the effectiveness of JtR's brute-force attack against its dictionary attack. The results from this thesis may assist organisations in enforcing strong password policies on user accounts through the use of computer clusters, and may also support examining the possibility of using JtR as a tool to reliably measure password strength.
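
    A minimal sketch of the dictionary-splitting idea using mpi4py and the standard john command line (the thesis relied on an MPICH-enabled JtR build that parallelises internally; the file names and interleaved split below are illustrative assumptions):

from mpi4py import MPI
import subprocess

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank takes an interleaved slice of the dictionary so the candidate
# passwords are spread evenly across the cluster nodes.
with open("wordlist.txt") as f:
    words = [w for i, w in enumerate(f) if i % size == rank]

chunk = f"chunk_{rank}.txt"
with open(chunk, "w") as out:
    out.writelines(words)

# Run a plain dictionary attack on this rank's slice of candidate words
# against the shared password hash file.
subprocess.run(["john", f"--wordlist={chunk}", "passwd_hashes.txt"], check=False)

    Launched with something like mpirun -np 8 python split_john.py, each node then works an independent share of the dictionary, which is the parallelisation pattern the experiments above exploit.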