
    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks hold substantial potential for supporting a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling wireless-network applications, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to help readers clarify the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.
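    Among the paradigms the survey covers, reinforcement learning is the one most directly tied to online decision making in radios. As a purely illustrative sketch (not code or notation from the article), the tabular Q-learning loop below shows how a radio might learn channel selection from observed throughput; the channel count, reward model, and hyperparameters are all assumptions.

        import random

        # Illustrative only: tabular Q-learning for channel selection.
        # The 5-channel environment and reward model are assumptions,
        # not details taken from the surveyed article.
        N_CHANNELS = 5
        ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

        q = [0.0] * N_CHANNELS  # single-state MDP: one Q-value per channel

        def observed_throughput(channel):
            """Stand-in for a real throughput measurement (hypothetical)."""
            base = [0.2, 0.5, 0.9, 0.4, 0.7][channel]
            return base + random.uniform(-0.1, 0.1)

        for step in range(1000):
            # Epsilon-greedy action selection
            if random.random() < EPSILON:
                a = random.randrange(N_CHANNELS)
            else:
                a = max(range(N_CHANNELS), key=lambda c: q[c])
            r = observed_throughput(a)
            # Q-learning update; with a single state, max_a' Q(s', a') = max(q)
            q[a] += ALPHA * (r + GAMMA * max(q) - q[a])

        print("learned channel preferences:", [round(v, 2) for v in q])

    Over enough steps, the highest Q-value settles on the channel with the best expected throughput, which is the essence of the interactive decision making the article attributes to RL.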

    EdgeAISim: A Toolkit for Simulation and Modelling of AI Models in Edge Computing Environments

    To meet next-generation Internet of Things (IoT) application demands, edge computing moves processing power and storage closer to the network edge to minimize latency and bandwidth utilization. Edge computing is becoming increasingly popular as a result of these benefits, but it comes with challenges such as managing resources efficiently. Researchers are utilising Artificial Intelligence (AI) models to solve the challenge of resource management in edge computing systems. However, existing simulation tools are concerned only with typical resource management policies, not with the adoption and implementation of AI models for resource management in particular. Consequently, researchers continue to face significant challenges, and using AI models when designing novel resource management policies for edge computing remains hard and time-consuming with existing simulation tools. To overcome these issues, we propose a lightweight Python-based toolkit called EdgeAISim for the simulation and modelling of AI models for designing resource management policies in edge computing environments. In EdgeAISim, we extended the basic components of the EdgeSimPy framework and developed new AI-based simulation models for task scheduling, energy management, service migration, network flow scheduling, and mobility support for edge computing environments. We utilized advanced AI models such as Multi-Armed Bandit with Upper Confidence Bound, Deep Q-Networks, Deep Q-Networks with Graph Neural Networks, and Actor-Critic Networks to optimize power usage while efficiently managing task migration within the edge computing environment. The performance of these proposed EdgeAISim models is compared with a baseline that uses a worst-fit-algorithm-based resource management policy in different settings. Experimental results indicate that EdgeAISim achieves a substantial reduction in power consumption, highlighting the success of its power optimization strategies. The development of EdgeAISim represents a promising step towards sustainable edge computing, providing eco-friendly and energy-efficient solutions that facilitate efficient task management in edge environments for different large-scale scenarios.
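    Of the models listed, the Multi-Armed Bandit with Upper Confidence Bound is the simplest to illustrate. The sketch below is an assumption-laden illustration rather than EdgeAISim's actual API: it treats each edge server as a bandit arm and applies the standard UCB1 rule to balance exploiting servers that have delivered good (e.g. low-power) outcomes against exploring under-sampled ones. The server list and reward function are hypothetical.

        import math
        import random

        # Illustrative UCB1 task placement; the servers, reward model, and
        # loop are assumptions, not EdgeAISim's actual interfaces.
        servers = ["edge-0", "edge-1", "edge-2"]  # hypothetical edge servers
        counts = [0] * len(servers)               # times each server was chosen
        mean_reward = [0.0] * len(servers)        # running mean reward per server

        def reward_for(i):
            """Stand-in for a measured reward, e.g. negative normalized power draw."""
            return [0.6, 0.8, 0.5][i] + random.uniform(-0.05, 0.05)

        for t in range(1, 501):
            if 0 in counts:
                choice = counts.index(0)  # play every arm once first
            else:
                # UCB1: argmax of mean reward plus exploration bonus
                choice = max(range(len(servers)),
                             key=lambda i: mean_reward[i]
                             + math.sqrt(2 * math.log(t) / counts[i]))
            r = reward_for(choice)
            counts[choice] += 1
            mean_reward[choice] += (r - mean_reward[choice]) / counts[choice]

        print(dict(zip(servers, counts)))  # most placements should favour the best server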

    Trustworthy Federated Learning: A Survey

    Federated Learning (FL) has emerged as a significant advancement in the field of Artificial Intelligence (AI), enabling collaborative model training across distributed devices while maintaining data privacy. As the importance of FL increases, addressing trustworthiness issues in its various aspects becomes crucial. In this survey, we provide an extensive overview of the current state of Trustworthy FL, exploring existing solutions and well-defined pillars relevant to Trustworthy FL. Despite the growth in literature on trustworthy centralized Machine Learning (ML)/Deep Learning (DL), further efforts are necessary to identify trustworthiness pillars and evaluation metrics specific to FL models, as well as to develop solutions for computing trustworthiness levels. We propose a taxonomy that encompasses three main pillars: Interpretability, Fairness, and Security & Privacy. Each pillar represents a dimension of trust, further broken down into different notions. Our survey covers trustworthiness challenges at every level of FL settings. We present a comprehensive architecture of Trustworthy FL, addressing the fundamental principles underlying the concept, and offer an in-depth analysis of trust assessment mechanisms. In conclusion, we identify key research challenges related to every aspect of Trustworthy FL and suggest future research directions. This comprehensive survey serves as a valuable resource for researchers and practitioners working on the development and implementation of Trustworthy FL systems, contributing to a more secure and reliable AI landscape.
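    For readers new to FL, the collaborative training pattern the survey builds on is commonly realized as federated averaging (FedAvg). The sketch below is a generic single-machine simulation of FedAvg included for orientation only; the linear model, client data, and size-weighted aggregation are assumptions, not anything specified by the survey.

        import numpy as np

        # Generic FedAvg rounds, simulated on one machine for orientation.
        # The linear model, clients, and data are hypothetical.
        rng = np.random.default_rng(0)

        def local_update(w, X, y, lr=0.1, epochs=5):
            """Each client refines the global weights on its private data."""
            w = w.copy()
            for _ in range(epochs):
                grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
                w -= lr * grad
            return w

        # Three clients whose raw data never leaves the "device"
        true_w = np.array([1.0, -2.0, 0.5])
        clients = []
        for _ in range(3):
            X = rng.normal(size=(20, 3))
            y = X @ true_w + rng.normal(scale=0.1, size=20)
            clients.append((X, y))

        w_global = np.zeros(3)
        for _ in range(30):
            local_ws = [local_update(w_global, X, y) for X, y in clients]
            sizes = [len(y) for _, y in clients]
            # Server aggregates: average of client weights, weighted by data size
            w_global = np.average(local_ws, axis=0, weights=sizes)

        print("recovered weights:", np.round(w_global, 2))

    Only model parameters cross the network in this pattern, which is exactly the property that makes the survey's privacy and security pillars both possible and necessary.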

    Applications

    Volume 3 describes how resource-aware machine learning methods and techniques are used to successfully solve real-world problems. The book provides numerous specific application examples: in health and medicine for risk modelling, diagnosis, and treatment selection for diseases; in electronics, steel production, and milling for quality control during manufacturing processes; and in traffic and logistics for smart cities and for mobile communications.

    In situ Distributed Genetic Programming: An Online Learning Framework for Resource Constrained Networked Devices

    This research presents In situ Distributed Genetic Programming (IDGP) as a framework for distributively evolving logic while attempting to maintain acceptable average performance on highly resource-constrained embedded networked devices. The framework is motivated by the proliferation of devices employing microcontrollers with communications capability and by the absence of online learning approaches that can evolve programs for them. Swarm robotics, Internet of Things (IoT) devices including smartphones, and arguably the most constrained of the embedded systems, Wireless Sensor Network (WSN) motes, all possess the capabilities necessary for the distributed evolution of logic: sensing, computing, actuation, and communications. Genetic programming (GP) can evolve logic for these devices using their “native” logic representation (i.e. programs), so technically GP could evolve any behaviour that can be coded on the device.

    IDGP is designed, implemented, demonstrated, and analysed as a framework for evolving logic via genetic programming on highly resource-constrained networked devices in real-world environments while achieving acceptable average performance. Designed with highly resource-constrained devices in mind, IDGP provides a guide for those wishing to implement genetic programming on such systems. An implementation on mote-class devices is demonstrated to evolve logic for a time-varying sense-compute-act problem and for another problem requiring the evolution of primitive communications. Distributed evolution of logic is also achieved by employing the Island Model architecture, and a comparison of individual and distributed evolution (with the same and slightly different goals) is presented. This demonstrates the advantage of leveraging the fact that such devices often reside within networks of devices experiencing similar conditions.

    Since GP is a population-based metaheuristic that relies on the diversity of the population to achieve learning, many, if not most, programs within the population exhibit poor performance. As such, the average observed performance (pool fitness) of the population under the standard GP learning mechanism is unlikely to be acceptable for online learning scenarios; this is suspected to be the reason why no previous attempts have been made to deploy standard GP as an online learning approach. Nonetheless, the benefits of GP for evolving logic on such devices are compelling and motivated the design of a novel satisficing heuristic called Fitness Importance (FI). FI is a population-based heuristic used to bias the evaluation of candidate solutions such that an “acceptable” average fitness (AAF) is achieved while retaining an ongoing, though diminished, learning capacity. This trade-off motivated further investigation into whether dynamically adjusting the average performance in response to the AAF would be superior to a constant, balanced performing-learning approach. Dynamic and constant strategies were compared on a simple problem in which the AAF target was changed during evolution, revealing that dynamically tracking the AAF target can yield a higher success rate in meeting the AAF.

    The combination of IDGP and FI offers a novel approach to achieving online learning with GP on highly resource-constrained embedded systems while simultaneously respecting an acceptable average performance level that may change during the operational lifetime. This approach could be applied to swarm and cooperative robot systems, WSN motes, or IoT devices, allowing them to cooperatively learn and adapt their logic locally to meet dynamic performance requirements.
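    The abstract does not spell out how Fitness Importance biases evaluation, but its stated goal, meeting an acceptable average fitness (AAF) target while preserving some learning capacity, suggests an exploit-explore scheduling trade-off along the lines sketched below. Everything here (the stand-in “programs”, the AAF tracking rule, the champion/candidate split) is a hypothetical reading of the idea, not the thesis's algorithm.

        import random

        # Hypothetical FI-style trade-off: an online learner must keep the
        # *observed* average fitness above an acceptable target (AAF) while
        # still spending evaluations on unproven candidates. Not the thesis's
        # actual algorithm; all details are assumptions.
        random.seed(1)

        AAF_TARGET = 0.6
        population = [random.random() for _ in range(20)]  # stand-in programs: fitness in [0, 1]
        best_known = max(population[:5])                   # fitness of the current champion
        observed = []                                      # fitness actually delivered online

        for step in range(500):
            avg = sum(observed) / len(observed) if observed else 0.0
            if avg < AAF_TARGET:
                # Behind target: exploit the champion to pull the average up
                fitness = best_known
            else:
                # Ahead of target: evaluate a random candidate (learning)
                candidate = random.choice(population)
                fitness = candidate
                best_known = max(best_known, candidate)    # learning = finding better programs
            observed.append(fitness)

        print(f"average delivered fitness: {sum(observed) / len(observed):.3f} "
              f"(target {AAF_TARGET}); champion fitness: {best_known:.3f}")

    The dynamic strategy the thesis compares would correspond to moving AAF_TARGET during the run and letting the exploit/explore split track it.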