
    Learning preferences for personalisation in a pervasive environment

    With ever-increasing access to technological devices, services and applications, there is an increasing burden on the end user to manage and configure such resources. This burden will continue to grow as the vision of pervasive environments, with ubiquitous access to a plethora of resources, becomes a reality. It is therefore essential to develop and provide mechanisms that relieve the user of these burdens. Such mechanisms include personalisation systems that adapt resources on the user's behalf in an appropriate way, based on the user's current context and goals. The key knowledge base of many personalisation systems is the set of user preferences indicating which adaptations should be performed under which contextual situations. This thesis investigates the challenges of developing a system that can learn such preferences by monitoring user behaviour within a pervasive environment. Based on the findings of related work and experience from EU project research, several key design requirements for such a system are identified. These requirements drive the design of a system that can learn accurate and up-to-date preferences for personalisation in a pervasive environment. A standalone prototype of the preference learning system has been developed. In addition, the preference learning system has been integrated into a pervasive platform developed through an EU research project. The preference learning system is fully evaluated in terms of both its machine learning performance and its utility in a pervasive environment with real end users.
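    The following is a minimal sketch of the rule-induction idea described above: mining "IF context THEN action" preferences from a log of monitored behaviour, with support and confidence thresholds so that stale or one-off behaviour is not promoted to a preference. All names and thresholds here are illustrative assumptions, not taken from the thesis, which evaluates a full machine-learning pipeline.

```python
from collections import Counter, defaultdict

# Each monitored event pairs the user's context with the action they took,
# e.g. ({"location": "office", "time": "morning"}, "phone=silent").
History = list[tuple[dict[str, str], str]]

def learn_preferences(history: History, min_support: int = 3, min_conf: float = 0.8):
    """Induce 'IF context THEN action' preference rules from behaviour logs.

    A rule is kept only if the context was observed at least `min_support`
    times and the same action followed in at least `min_conf` of those
    observations, so accidental behaviour does not become a preference.
    """
    actions_by_context = defaultdict(Counter)
    for context, action in history:
        key = tuple(sorted(context.items()))   # hashable context signature
        actions_by_context[key][action] += 1

    rules = {}
    for key, counts in actions_by_context.items():
        total = sum(counts.values())
        action, freq = counts.most_common(1)[0]
        if total >= min_support and freq / total >= min_conf:
            rules[key] = action
    return rules

def personalise(rules, current_context):
    """Apply the learned preference matching the current context, if any."""
    return rules.get(tuple(sorted(current_context.items())))

history = [({"location": "office", "time": "morning"}, "phone=silent")] * 4
rules = learn_preferences(history)
print(personalise(rules, {"location": "office", "time": "morning"}))  # phone=silent
```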

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided that covers, inter alia, rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple data stream environment is explored, and a range of techniques is reviewed, covering model-based approaches, `programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to `learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
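    As a hedged illustration of the two paradigms the report contrasts, the sketch below combines a small rule base for known misuses with a statistical profile of normal behaviour for unknown ones. The rules, event fields and z-score threshold are invented for illustration; real systems correlate temporally distributed events across multiple data streams.

```python
import statistics

# Known-misuse rules: encoded knowledge, precise but blind to novel attacks.
MISUSE_RULES = [
    lambda e: e["event"] == "login_failed" and e["count"] >= 5,      # brute force
    lambda e: e["event"] == "call_setup" and e["duration_s"] == 0,   # signalling probe
]

class NormalProfile:
    """Anomaly side: learn what 'normal' looks like and flag deviations.

    Prone to false positives on legitimate-but-rare behaviour, which is
    why it is paired with the rule base rather than used alone.
    """
    def __init__(self, baseline_rates):
        self.mean = statistics.mean(baseline_rates)
        self.stdev = statistics.stdev(baseline_rates)

    def is_anomalous(self, rate, z_threshold=3.0):
        return abs(rate - self.mean) > z_threshold * self.stdev

def detect(event, rate, profile):
    if any(rule(event) for rule in MISUSE_RULES):
        return "known misuse"           # rule base: high precision
    if profile.is_anomalous(rate):
        return "possible novel misuse"  # anomaly detector: wider recall
    return "normal"

profile = NormalProfile([100, 110, 95, 105, 102, 98])
print(detect({"event": "login_failed", "count": 7}, rate=101, profile=profile))
print(detect({"event": "call_setup", "duration_s": 30}, rate=400, profile=profile))
```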

    NOVEL USER-CENTRIC ARCHITECTURES FOR FUTURE GENERATION CELLULAR NETWORKS: DESIGN, ANALYSIS AND PERFORMANCE OPTIMIZATION

    Ambitious targets for aggregate throughput, energy efficiency (EE) and ubiquitous user experience are propelling the advent of ultra-dense networks. Inter-cell interference and high energy consumption in an ultra-dense network are the prime factors hindering the pursuit of these goals. To address this challenge, we investigate the idea of transforming network design from base station-centric to user-centric. To this end, we develop a mathematical framework and analyze multiple variants of user-centric networks, with the help of advanced scientific tools such as stochastic geometry, game theory, optimization theory and deep neural networks. We first present a user-centric radio access network (RAN) design and then propose novel base station association mechanisms that form virtual dedicated cells around users scheduled for downlink. The design question that arises is: what should the ideal size of the dedicated regions around scheduled users be? To answer this question, we follow a stochastic geometry-based approach to quantify the area spectral efficiency (ASE) and energy efficiency (EE) of a user-centric Cloud RAN architecture. Observing that the two efficiency metrics have conflicting optimal user-centric cell sizes, we propose a game-theoretic self-organizing network (GT-SON) framework that can orchestrate the network between ASE- and EE-focused operational modes in real time, in response to changes in network conditions and the operator's revenue model, to achieve a Pareto-optimal solution. The designed model is shown to outperform base station-centric design in terms of both ASE and EE in dense deployment scenarios. Taking this user-centric approach as a baseline, we improve the ASE and EE performance by making the dimensions of the user-centric regions flexible as a function of the data requirement of each device. Instead of optimizing the network-wide ASE or EE, each user device competes for a user-centric region based on its data requirements. This competition is modeled via an evolutionary game and a Vickrey-Clarke-Groves auction. The data-requirement-based flexibility in the user-centric RAN architecture not only improves the ASE and EE, but also reduces the scheduling wait time per user. Offloading dense user hotspots to short-range mmWave cells promises to meet the enhanced mobile broadband requirements of 5G and beyond. To investigate this, we integrate the three key enablers, i.e. user-centric virtual cell design, ultra-dense deployment and mmWave communication, in a multi-tier Stienen-geometry-based user-centric architecture. Taking into account the characteristics of the mmWave propagation channel, such as blockage and fading, we develop a statistical framework for deriving the coverage probability of an arbitrary user equipment scheduled within the proposed architecture. A key advantage observed through this architecture is a significant reduction in scheduling latency compared to the baseline user-centric model. Furthermore, the interplay between certain system design parameters was found to orchestrate the ASE-EE tradeoff within the proposed network design. We extend this work by framing a stochastic optimization problem over the design parameters for a Pareto-optimal ASE-EE tradeoff under random placements of mobile users, macro base stations and mmWave cells within the network. To solve this optimization problem, we follow a deep learning approach to estimate the optimal design parameters in real time. Our results show that if the deep learning model is trained with sufficient data and tuned appropriately, it yields near-optimal performance while eliminating the long processing times needed for system-wide optimization. The contributions of this dissertation have the potential to cause a paradigm shift from reactive cell-centric network design to an agile user-centric design that enables real-time optimization capabilities, ubiquitous user experience, higher system capacity and improved network-wide energy efficiency.
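    A toy Monte Carlo, not the dissertation's calibrated stochastic geometry model, can illustrate the ASE-EE tension that motivates the GT-SON framework: base stations and users are dropped uniformly at random, each scheduled user claims the base stations inside a disc of radius R (its virtual dedicated cell), unclaimed base stations sleep, and sweeping R exposes the conflicting optima. All densities, powers and the path-loss model below are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy parameters (illustrative, not the dissertation's calibrated values)
L = 1000.0            # side of the square region, metres
LAM_BS = 100e-6       # base-station density per m^2 (~100 BSs per km^2)
N_USERS = 10          # scheduled users per snapshot
ALPHA = 4.0           # path-loss exponent
P_TX, P_ACTIVE, P_IDLE = 1.0, 10.0, 1.0   # transmit / circuit / sleep power, W
NOISE = 1e-9

def snapshot(radius):
    n_bs = rng.poisson(LAM_BS * L * L)
    bs = rng.uniform(0, L, size=(n_bs, 2))
    users = rng.uniform(0, L, size=(N_USERS, 2))
    d = np.linalg.norm(bs[:, None, :] - users[None, :, :], axis=2)  # (n_bs, N_USERS)
    nearest = d.argmin(axis=1)
    # A BS serves its nearest scheduled user if it falls inside that user's
    # user-centric disc of the given radius; otherwise it sleeps.
    serving = d[np.arange(n_bs), nearest] <= radius
    rx = P_TX * np.maximum(d, 1.0) ** -ALPHA    # received power, 1 m cutoff
    sum_rate = 0.0
    for u in range(N_USERS):
        mine = serving & (nearest == u)
        others = serving & (nearest != u)
        sig = rx[mine, u].sum()
        interf = rx[others, u].sum()
        sum_rate += np.log2(1.0 + sig / (interf + NOISE))
    power = serving.sum() * P_ACTIVE + (~serving).sum() * P_IDLE
    ase = sum_rate / (L * L)          # bits/s/Hz per m^2
    return ase, ase / power           # (ASE, EE)

for r in (50, 100, 200, 400):
    ase, ee = np.mean([snapshot(r) for _ in range(200)], axis=0)
    print(f"R={r:3d} m  ASE={ase:.2e}  EE={ee:.2e}")
```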

    On the Intersection of Communication and Machine Learning

    The intersection of communication and machine learning is attracting increasing interest from both communities. On the one hand, the development of modern communication systems brings large amounts of data and high performance requirements, which challenge the classic analytical-derivation-based study philosophy and encourage researchers to explore data-driven methods, such as machine learning, to solve problems of high complexity and large scale. On the other hand, the use of distributed machine learning introduces communication cost as one of the basic considerations in the design of machine learning algorithms and systems. In this thesis, we first explore the application of machine learning to one of the classic problems in wireless networks, resource allocation, for heterogeneous millimeter wave networks in highly dynamic environments. We address practical concerns by providing an efficient online and distributed framework. In the second part, sampling-based communication-efficient distributed learning algorithms are proposed. We exploit the trade-off between local computation and total communication cost and propose algorithms with good theoretical bounds. In more detail, this thesis makes the following contributions. First, we introduce a reinforcement learning framework to solve resource allocation problems in heterogeneous millimeter wave networks; the large state/action space is decomposed according to the topology of the network and solved by an efficient distributed message-passing algorithm, and the inference process is further sped up by an online updating procedure. Second, we propose a distributed coreset-based boosting framework: an efficient coreset construction algorithm is developed based on the prior knowledge provided by clustering, the coreset is then integrated with boosting with an improved convergence rate, and the framework is extended to the distributed setting, where communication cost is reduced by the good approximation provided by the coreset. Third, we propose a selective sampling framework to construct a subset of samples that effectively represents the model space; based on the prior distribution of the model space, or a large number of samples from it, we derive a computationally efficient method to construct such a subset by minimizing the error of classifying a classifier.
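    The coreset idea in the second part can be sketched as follows, under simplifying assumptions: each worker clusters its local shard, samples points with probability proportional to their distance from the cluster centres (a crude stand-in for the sensitivity scores a real coreset construction would use), reweights them for unbiasedness, and ships only the small weighted sample to the coordinator. A logistic regressor stands in for the thesis's boosting learner, and all data is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def local_coreset(X, y, m, k=10):
    """Clustering-guided importance sampling (a simplified sensitivity proxy):
    points far from their cluster centre are more influential, so they are
    sampled with higher probability and reweighted to stay unbiased."""
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(X)
    dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    p = (dist + 1e-9) / (dist + 1e-9).sum()
    idx = rng.choice(len(X), size=m, replace=False, p=p)
    w = 1.0 / (len(X) * p[idx])          # inverse-probability weights
    return X[idx], y[idx], w

# Two workers hold disjoint shards; each sends only m weighted points upstream.
def make_shard(n):
    X = rng.normal(size=(n, 5))
    y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(int)
    return X, y

shards = [make_shard(5000), make_shard(5000)]
parts = [local_coreset(X, y, m=200) for X, y in shards]
Xc = np.vstack([p[0] for p in parts])
yc = np.concatenate([p[1] for p in parts])
wc = np.concatenate([p[2] for p in parts])

# Coordinator trains on 400 weighted points instead of 10,000 raw ones.
model = LogisticRegression(max_iter=500).fit(Xc, yc, sample_weight=wc)
X_test, y_test = make_shard(2000)
print("coreset-trained accuracy:", model.score(X_test, y_test))
```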

    A Decentralized Pilot Assignment Algorithm for Scalable O-RAN Cell-Free Massive MIMO

    Radio access networks (RANs) in monolithic architectures have limited adaptability for supporting different network scenarios. Recently, open-RAN (O-RAN) techniques have begun adding enormous flexibility to RAN implementations. O-RAN is a natural architectural fit for cell-free massive multiple-input multiple-output (CFmMIMO) systems, where many geographically distributed access points (APs) are employed to achieve ubiquitous coverage and enhanced user performance. In this paper, we address the decentralized pilot assignment (PA) problem for scalable O-RAN-based CFmMIMO systems. We propose a low-complexity PA scheme using a multi-agent deep reinforcement learning (MA-DRL) framework in which multiple learning agents perform distributed learning over the O-RAN communication architecture to suppress pilot contamination. Our approach does not require prior channel knowledge but instead relies on real-time interactions with the environment during the learning procedure. In addition, we design a codebook search (CS) scheme that exploits the decentralization of our O-RAN CFmMIMO architecture, where different codebook sets can be utilized to further improve PA performance without any significant additional complexity. Numerical evaluations verify that our proposed scheme provides substantial computational scalability advantages and improvements in channel estimation performance compared to the state-of-the-art.
    Comment: This paper has been submitted to the IEEE Journal on Selected Areas in Communications for possible publication.
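    As a deliberately simplified stand-in for the paper's MA-DRL scheme, the sketch below runs stateless multi-agent Q-learning in which each user-agent repeatedly picks a pilot and is rewarded for low contamination, i.e. for avoiding pilots chosen by nearby users. The contamination proxy, scales and hyperparameters are all assumptions; the actual scheme learns over the O-RAN architecture without prior channel knowledge.

```python
import numpy as np

rng = np.random.default_rng(3)

N_UE, N_PILOTS = 8, 4
pos = rng.uniform(0, 1000, size=(N_UE, 2))            # static UE positions
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2) + np.eye(N_UE)

def contamination(assign, u):
    """Pilot contamination seen by user u: co-pilot users interfere more
    strongly the closer they are (a crude proxy for channel correlation)."""
    same = (assign == assign[u])
    same[u] = False
    return (1.0 / d[u, same] ** 2).sum() * 1e6

# One independent Q-table per agent over its own pilot choice (stateless
# multi-agent Q-learning; the paper uses deep RL over the O-RAN interfaces).
Q = np.zeros((N_UE, N_PILOTS))
eps, lr = 0.2, 0.1
assign = rng.integers(N_PILOTS, size=N_UE)

for step in range(3000):
    u = rng.integers(N_UE)                            # one agent acts at a time
    a = rng.integers(N_PILOTS) if rng.random() < eps else Q[u].argmax()
    assign[u] = a
    reward = -contamination(assign, u)
    Q[u, a] += lr * (reward - Q[u, a])                # bandit-style update

greedy = Q.argmax(axis=1)
print("learned pilot assignment:", greedy)
print("total contamination:", sum(contamination(greedy, u) for u in range(N_UE)))
```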

    Modeling network traffic on a global network-centric system with artificial neural networks

    This dissertation proposes a new methodology for modeling and predicting network traffic. It features an adaptive architecture based on artificial neural networks and is especially suited to large-scale, global, network-centric systems. Accurate characterization and prediction of network traffic is essential for network resource sizing and real-time network traffic management. As networks continue to increase in size and complexity, the task has become increasingly difficult, and current methodology is not sufficiently adaptable or scalable. Current methods model network traffic with explicit mathematical equations which are not easily maintained or adjusted. The accuracy of these models depends on detailed characterization of the traffic stream, measured at points along the network where the data is often subject to constant variation and rapid evolution. The main contribution of this dissertation is the development of a methodology that allows artificial neural networks to be utilized with increased capability for adaptation and scalability. Application to an operating global broadband network, the Connexion by Boeing network, was evaluated to establish feasibility. A simulation model was constructed and testing was conducted with operational scenarios to demonstrate applicability on the case study network and to evaluate improvements in accuracy over existing methods.
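    A minimal sliding-window sketch of the neural approach follows, with a generic MLP standing in for the dissertation's adaptive architecture and synthetic traffic standing in for measurements from the Connexion by Boeing network: past samples are the features, the next sample is the target.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Synthetic traffic: a daily cycle plus bursty noise (placeholder for the
# live link measurements the dissertation models).
t = np.arange(2000)
traffic = 50 + 20 * np.sin(2 * np.pi * t / 288) + rng.gamma(2.0, 2.0, size=t.size)

WINDOW = 12   # predict the next sample from the previous 12
X = np.lib.stride_tricks.sliding_window_view(traffic, WINDOW)[:-1]
y = traffic[WINDOW:]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = np.mean(np.abs(pred - y[split:]) / y[split:]) * 100
print(f"mean absolute percentage error on held-out traffic: {mape:.1f}%")
```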