885 research outputs found

    Engineering Resilient Collective Adaptive Systems by Self-Stabilisation

    Collective adaptive systems are an emerging class of networked computational systems, particularly suited to application domains such as smart cities, complex sensor networks, and the Internet of Things. These systems tend to feature large scale, heterogeneity of communication model (including opportunistic peer-to-peer wireless interaction), and require inherent self-adaptiveness to address unforeseen changes in operating conditions. In this context, it is extremely difficult (if not seemingly intractable) to engineer reusable pieces of distributed behaviour that are provably correct and smoothly composable. Building on the field calculus, a computational model (and associated toolchain) capturing the notion of aggregate network-level computation, we address this problem with an engineering methodology coupling formal theory and computer simulation. On the one hand, functional properties are addressed by identifying the largest-to-date field calculus fragment generating self-stabilising behaviour, guaranteed to eventually attain a correct and stable final state despite any transient perturbation in state or topology, and including highly reusable building blocks for information spreading, aggregation, and time evolution. On the other hand, dynamical properties are addressed by simulation, empirically evaluating the different performances that can be obtained by switching between implementations of building blocks with provably equivalent functional properties. Overall, our methodology sheds light on how to identify core building blocks of collective behaviour, and how to select implementations that improve system performance while leaving overall system function and resiliency properties unchanged.
    Comment: To appear in ACM Transactions on Modeling and Computer Simulation
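
    As a rough illustration of the kind of self-stabilising "information spreading" building block the abstract refers to, the minimal Python sketch below simulates a synchronous distance-to-source gradient over a toy network. The round-based scheduler, graph representation, and all names are assumptions made purely for illustration; this is not the authors' field calculus toolchain.

```python
# Minimal sketch (assumed names/structure): a self-stabilising "gradient"
# building block. Each node repeatedly replaces its distance estimate with the
# minimum over neighbours of (neighbour estimate + link weight); sources stay at 0.
import math

def gradient_round(dist, neighbours, weights, sources):
    """One synchronous update round of the distance-estimation field."""
    new_dist = {}
    for node in dist:
        if node in sources:
            new_dist[node] = 0.0
        else:
            candidates = [dist[n] + weights[(node, n)] for n in neighbours[node]]
            new_dist[node] = min(candidates, default=math.inf)
    return new_dist

# Toy network: a chain a - b - c - d with unit-weight links and source {a}.
neighbours = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
weights = {(u, v): 1.0 for u in neighbours for v in neighbours[u]}
sources = {"a"}

# Start from an arbitrarily perturbed state: self-stabilisation means the
# correct distances (0, 1, 2, 3) are reached regardless of the initial values.
dist = {"a": 7.0, "b": 0.0, "c": 42.0, "d": 3.5}
for _ in range(10):
    dist = gradient_round(dist, neighbours, weights, sources)
print(dist)  # -> {'a': 0.0, 'b': 1.0, 'c': 2.0, 'd': 3.0}
```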

    Availability by Design: A Complementary Approach to Denial-of-Service


    Adaptive reinforcement learning for heterogeneous network selection

    Next generation 5G mobile wireless networks will consist of multiple technologies for devices to access the network at the edge. One of the keys to 5G is therefore the ability of a device to intelligently select its Radio Access Technology (RAT). Current fully distributed algorithms for RAT selection, although guaranteeing convergence to equilibrium states, are often slow, require long exploration times, and may converge to undesirable equilibria. In this dissertation, we propose three novel reinforcement learning (RL) frameworks to improve the efficiency of existing distributed RAT selection algorithms in a heterogeneous environment, where users may potentially apply a number of different RAT selection procedures. Although our research focuses on solutions for RAT selection in current and future mobile wireless networks, the proposed solutions are general and suitable for any large-scale distributed multi-agent system.
    In the first framework, called RL with Non-positive Regret, we propose a novel adaptive RL procedure for multi-agent non-cooperative repeated games. The main contribution is to use both positive and negative regrets in RL to improve the convergence speed and fairness of the well-known regret-based RL procedure. Significant improvements in performance compared to other related algorithms in the literature are demonstrated. In the second framework, called RL with Network-Assisted Feedback (RLNF), our core contribution is a network feedback model that uses network-assisted information to improve the performance of distributed RL for RAT selection. RLNF guarantees a no-regret payoff in the long run for any user adopting it, regardless of what other users might do, and so can work in an environment where not all users use the same learning strategy. This is an important implementation advantage, as RLNF can be implemented within current mobile network standards. In the third framework, we propose a novel adaptive RL-based mechanism for RAT selection that can effectively handle user mobility. The key contribution is to leverage forgetting methods to react rapidly to changes in radio conditions when users move. We show that our solution improves the performance of wireless networks and converges much faster when users move than non-adaptive solutions.
    Another objective of the research is to study the impact of various network models on the performance of different RAT selection approaches. We propose a unified benchmark to compare the performance of different algorithms under the same computational environment. The comparative studies reveal that, among all the important network parameters that influence the performance of RAT selection algorithms, the number of base stations that a user can connect to has the most significant impact. This finding provides guidelines for the proper design of RAT selection algorithms for future 5G. Our evaluation benchmark can serve as a reference for researchers, network developers, and engineers.
    Overall, the thesis provides different reinforcement learning frameworks to improve the efficiency of current fully distributed algorithms for heterogeneous RAT selection. We prove the convergence of the proposed reinforcement learning procedures using the differential inclusion (DI) technique. The theoretical analyses demonstrate that the use of DI not only provides an effective method to study the convergence properties of adaptive procedures in game-theoretic learning, but also yields a much more concise and extensible proof compared to the classical approaches.
    Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 201
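
    As a rough illustration of the regret-based learning that the first framework builds on, the minimal Python sketch below runs a standard regret-matching-style selection rule in a toy RAT-selection game where each RAT's capacity is shared equally among the users that pick it. The capacities, payoff model, and the simple proportional-to-positive-regret rule are assumptions for illustration; the dissertation's own contributions (negative regrets, network-assisted feedback, forgetting for mobility) are not reproduced here.

```python
# Minimal sketch (assumed payoff model): regret-based RAT selection in a toy
# congestion game. Each user accumulates regret for each RAT and then selects
# a RAT with probability proportional to its positive accumulated regret.
import random

RATS = {"LTE": 10.0, "WiFi": 6.0}   # hypothetical capacities
N_USERS = 8
ROUNDS = 500

regrets = [{rat: 0.0 for rat in RATS} for _ in range(N_USERS)]

def choose(reg):
    """Pick a RAT with probability proportional to positive accumulated regret."""
    positive = {a: max(r, 0.0) for a, r in reg.items()}
    total = sum(positive.values())
    if total == 0.0:
        return random.choice(list(RATS))   # no regret yet: explore uniformly
    pick = random.uniform(0.0, total)
    for a, p in positive.items():
        pick -= p
        if pick <= 0.0:
            return a
    return a  # floating-point fallback

for _ in range(ROUNDS):
    choices = [choose(reg) for reg in regrets]
    load = {rat: choices.count(rat) for rat in RATS}
    for u, reg in enumerate(regrets):
        payoff = RATS[choices[u]] / load[choices[u]]
        for alt in RATS:
            # counterfactual payoff had user u switched to 'alt' unilaterally
            alt_load = load[alt] + (0 if alt == choices[u] else 1)
            reg[alt] += RATS[alt] / alt_load - payoff

print({rat: choices.count(rat) for rat in RATS})  # roughly proportional to capacity
```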

    An architectural framework for self-configuration and self-improvement at runtime

    [no abstract available]