
    Reinforcement learning for efficient network penetration testing

    Penetration testing (also known as pentesting or PT) is a common practice for actively assessing the defenses of a computer network by planning and executing all possible attacks to discover and exploit existing vulnerabilities. Current penetration testing methods are becoming increasingly non-standard, composite and resource-consuming despite the use of evolving tools. In this paper, we propose and evaluate an AI-based pentesting system which uses machine learning techniques, namely reinforcement learning (RL), to learn and reproduce pentesting activities of average and high complexity. The proposed system, named the Intelligent Automated Penetration Testing System (IAPTS), consists of a module that integrates with industrial PT frameworks to enable them to capture information, learn from experience, and reproduce tests in future similar testing cases. IAPTS aims to save human resources while producing much-enhanced results in terms of time consumption, reliability and frequency of testing. IAPTS models PT environments and tasks as a partially observable Markov decision process (POMDP) problem, which is solved with a POMDP solver. Although the scope of this paper is limited to PT planning for network infrastructures and not the entire practice, the obtained results support the hypothesis that RL can enhance PT beyond the capabilities of any human PT expert in terms of time consumed, attack vectors covered, and the accuracy and reliability of the outputs. In addition, this work tackles the complex problem of capturing and re-using expertise by allowing the IAPTS learning module to store and re-use PT policies in the same way a human PT expert would learn, but more efficiently.
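    The core of any POMDP formulation like the one this abstract describes is a belief update: the agent never observes the true host state, only noisy scan results, and maintains a probability distribution over states. A minimal sketch follows; the states, observations and probabilities are illustrative assumptions, not values from the paper.

```python
# Bayes-filter belief update over a hidden host state, as maintained by
# a POMDP agent. All states, observations and probabilities below are
# hypothetical, chosen only to illustrate the mechanism.

def belief_update(belief, obs, obs_model):
    """Bayes-filter update: b'(s) is proportional to P(obs | s) * b(s)."""
    posterior = {s: obs_model[s][obs] * p for s, p in belief.items()}
    norm = sum(posterior.values())
    return {s: p / norm for s, p in posterior.items()}

# Hidden state: is the target host vulnerable to a given exploit?
belief = {"vulnerable": 0.5, "patched": 0.5}

# Observation model of a noisy vulnerability scan.
obs_model = {
    "vulnerable": {"scan_positive": 0.9, "scan_negative": 0.1},
    "patched":    {"scan_positive": 0.2, "scan_negative": 0.8},
}

# A positive scan shifts belief toward "vulnerable" (0.45 / 0.55 ≈ 0.82);
# a POMDP policy would weigh this belief against the cost of exploiting.
belief = belief_update(belief, "scan_positive", obs_model)
```

A POMDP solver plans over such beliefs rather than over raw states, which is what lets an RL-based pentester act sensibly under uncertainty about which hosts are actually exploitable.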

    The SECURE collaboration model

    The SECURE project has shown how trust can be made computationally tractable while retaining a reasonable connection with human and social notions of trust. SECURE has produced a well-founded theory of trust that has been tested and refined through use in real software such as collaborative spam filtering and an electronic purse. The software comprises the SECURE kernel with extensions for policy specification by application developers. It has yet to be applied to large-scale, multi-domain distributed systems that take different application contexts into account. The project has not considered privacy in evidence distribution, a crucial issue for many application domains, including public services such as healthcare and policing. The SECURE collaboration model has similarities with the trust domain concept, embodying the interaction set of a principal, but SECURE is primarily concerned with pseudonymous entities rather than domain-structured systems.

    Multi-agent quality of experience control

    In the framework of the Future Internet, the aim of the Quality of Experience (QoE) Control functionalities is to track the personalized desired QoE level of the applications. The paper proposes to perform this task by dynamically selecting the most appropriate Classes of Service (among the ones supported by the network), with the selection driven by a novel heuristic Multi-Agent Reinforcement Learning (MARL) algorithm. The paper shows that such an approach offers the opportunity to cope with some practical implementation problems: in particular, it makes it possible to address the so-called “curse of dimensionality” of MARL algorithms, achieving satisfactory performance even in the presence of several hundred agents.
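    One standard way to keep a many-agent setting like this tractable is to use independent learners: each agent keeps a Q-table over its own actions only, so the table has N_COS entries instead of the joint action space of size N_COS ** N_AGENTS. The sketch below illustrates that idea with stateless (bandit-style) Q-learning; the paper's actual heuristic MARL algorithm may differ, and all names and parameters here are illustrative.

```python
import random

# Independent Q-learners, one per application flow, each selecting a
# Class of Service (CoS). Treating agents as independent learners is
# one common way to sidestep the "curse of dimensionality" in MARL.

N_AGENTS, N_COS = 100, 3   # agents (application flows) and Classes of Service
ALPHA, EPSILON = 0.1, 0.1  # learning rate and exploration rate

q = [[0.0] * N_COS for _ in range(N_AGENTS)]  # one small Q-table per agent

def qoe_reward(cos, load):
    # Toy QoE model: a higher CoS gives better quality, but quality
    # degrades as more agents crowd into the same class.
    return (cos + 1) / N_COS - 0.5 * load[cos] / N_AGENTS

random.seed(0)
for step in range(200):
    # Each agent picks a CoS epsilon-greedily from its own Q-table.
    choices = [
        random.randrange(N_COS) if random.random() < EPSILON
        else max(range(N_COS), key=lambda a: q[i][a])
        for i in range(N_AGENTS)
    ]
    load = [choices.count(c) for c in range(N_COS)]
    # Stateless (bandit-style) Q-update from the observed QoE reward.
    for i, c in enumerate(choices):
        q[i][c] += ALPHA * (qoe_reward(c, load) - q[i][c])
```

Each agent's table stays size 3 regardless of the population, which is why this style of decomposition scales to hundreds of agents.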

    Challenges in Complex Systems Science

    The foundations of FuturICT are social science, complex systems science, and ICT. This paper lays out the main concerns and challenges in the science of complex systems in the context of FuturICT, with special emphasis on the complex-systems route to the social sciences. These include complex systems having: many heterogeneous interacting parts; multiple scales; complicated transition laws; unexpected or unpredicted emergence; sensitive dependence on initial conditions; path-dependent dynamics; networked hierarchical connectivities; interaction of autonomous agents; self-organisation; non-equilibrium dynamics; combinatorial explosion; adaptivity to changing environments; co-evolving subsystems; ill-defined boundaries; and multilevel dynamics. In this context, science is seen as the process of abstracting the dynamics of systems from data. This presents many challenges, including: data gathering by large-scale experiment, participatory sensing and social computation; managing huge distributed, dynamic and heterogeneous databases; moving from data to dynamical models, going beyond correlations to cause-effect relationships; understanding the relationship between simple and comprehensive models with appropriate choices of variables; ensemble modeling and data assimilation; modeling systems of systems of systems with many levels between micro and macro; and formulating new approaches to prediction, forecasting, and risk, especially in systems that can reflect on and change their behaviour in response to predictions, and systems whose apparently predictable behaviour is disrupted by apparently unpredictable rare or extreme events. These challenges are part of the FuturICT agenda.

    Simulation models of technological innovation: A Review

    The use of simulation modelling techniques in studies of technological innovation dates back to Nelson and Winter's 1982 book "An Evolutionary Theory of Economic Change" and is an area which has been steadily expanding ever since. Four main issues are identified in reviewing the key contributions made to this burgeoning literature. Firstly, a key driver in the construction of computer simulations has been the desire to develop more complicated theoretical models capable of dealing with the complex phenomena characteristic of technological innovation. Secondly, no single model captures all of the dimensions and stylised facts of innovative learning. Indeed, this paper argues that one can usefully distinguish between the various contributions according to the particular dimensions of the learning process which they explore. To this end the paper develops a taxonomy which distinguishes between these dimensions and clarifies the quite different perspectives underpinning the contributions made by mainstream economists and non-mainstream, neo-Schumpeterian economists. This brings us to the third point highlighted in the paper: the character of the simulation models developed is heavily influenced by the generic research questions of these different schools of thought. Finally, attention is drawn to an important distinction between the process of learning and adaptation within a static environment, and dynamic environments in which the introduction of new artefacts and patterns of behaviour changes the selective pressure faced by agents. We show that modellers choosing to explore one or the other of these settings reveal quite different conceptual understandings of "technological innovation".

    The social sciences and the web: From ‘Lurking’ to interdisciplinary ‘Big Data’ research

    Acknowledgements: This research is supported by an award made by the RCUK Digital Economy theme to the dot.rural Digital Economy Hub (award reference: EP/G066051/1) and by the UK Economic & Social Research Council (ESRC) (award reference: ES/M001628/1).

    Heterogeneous Agent Models in Economics and Finance. In: Handbook of Computational Economics II: Agent-Based Computational Economics, edited by Leigh Tesfatsion and Ken Judd, Elsevier, Amsterdam, 2006, pp. 1109–1186.

    This chapter surveys work on dynamic heterogeneous agent models (HAMs) in economics and finance. Emphasis is given to simple models that, at least to some extent, are tractable by analytic methods in combination with computational tools. Most of these models are behavioral models with boundedly rational agents using different heuristics or rule-of-thumb strategies that may not be perfect but perform reasonably well. Typically these models are highly nonlinear, e.g. due to evolutionary switching between strategies, and exhibit a wide range of dynamical behavior, ranging from a unique stable steady state to complex, chaotic dynamics. Aggregation of simple interactions at the micro level may generate sophisticated structure at the macro level. Simple HAMs can explain important observed stylized facts in financial time series, such as excess volatility, high trading volume, temporary bubbles and trend following, sudden crashes and mean reversion, clustered volatility and fat tails in the returns distribution.
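    The evolutionary switching mentioned in this abstract can be sketched with a minimal two-type model: agents choose between a fundamentalist rule and a trend-following (chartist) rule, with fractions updated by a discrete-choice (logit) rule on past forecast accuracy, in the spirit of the simple HAMs the chapter surveys. Parameter values and the toy market-clearing rule below are illustrative assumptions, not the chapter's own specification.

```python
import math

# Two-type heterogeneous agent model with evolutionary switching between
# a fundamentalist and a chartist forecasting rule.

BETA = 2.0      # intensity of choice in the discrete-choice switching rule
P_FUND = 100.0  # fundamental price

def fundamentalist(history):
    return P_FUND  # expects the price to revert to the fundamental

def chartist(history):
    return history[-1] + 0.8 * (history[-1] - history[-2])  # extrapolates the trend

prices = [101.0, 102.0]
w_f = 0.5                # fraction of agents using the fundamentalist rule
prev_forecasts = None
for t in range(50):
    e_f, e_c = fundamentalist(prices), chartist(prices)
    if prev_forecasts is not None:
        # Evolutionary switching: fitness is the negative squared error of
        # last period's forecast, mapped to fractions by a logit rule.
        u_f = -(prev_forecasts[0] - prices[-1]) ** 2
        u_c = -(prev_forecasts[1] - prices[-1]) ** 2
        z = math.exp(BETA * u_f) + math.exp(BETA * u_c)
        w_f = math.exp(BETA * u_f) / z
    # Toy market clearing: price is the fraction-weighted average forecast.
    prices.append(w_f * e_f + (1 - w_f) * e_c)
    prev_forecasts = (e_f, e_c)
```

Even this stripped-down version shows the model's key nonlinearity: the price dynamics depend on the fractions, and the fractions depend on past prices. With these parameters the price settles back toward the fundamental; higher intensity-of-choice or stronger trend extrapolation can instead produce the bubbles and crashes the abstract mentions.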