
    Federated Learning in Intelligent Transportation Systems: Recent Applications and Open Problems

    Intelligent transportation systems (ITSs) have been fueled by the rapid development of communication technologies, sensor technologies, and the Internet of Things (IoT). Nonetheless, due to the dynamic characteristics of vehicle networks, it is challenging to make timely and accurate decisions about vehicle behavior. Moreover, in the presence of mobile wireless communications, the privacy and security of vehicle information are at constant risk. In this context, a new paradigm is urgently needed for the various applications running in dynamic vehicle environments. As a distributed machine learning technology, federated learning (FL) has received extensive attention for its strong privacy-protection properties and easy scalability. We conduct a comprehensive survey of the latest developments in FL for ITS. Specifically, we first examine the prevalent challenges in ITS and elucidate the motivations for applying FL from various perspectives. Subsequently, we review existing deployments of FL in ITS across various scenarios and discuss the specific issues that arise in object recognition, traffic management, and service provisioning. Furthermore, we analyze the new challenges introduced by deploying FL and the inherent limitations that FL alone cannot fully address, including uneven data distribution, limited storage and computing power, and potential privacy and security concerns. We then examine existing collaborative technologies that can help mitigate these challenges. Lastly, we discuss the open challenges that remain in applying FL to ITS and propose several future research directions.
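
    To make the federated element concrete, the sketch below runs plain federated averaging (FedAvg), the canonical FL aggregation rule, on a toy linear-regression task; the four simulated clients stand in for vehicles and are purely illustrative assumptions, not the survey's experimental setup.

# Minimal FedAvg sketch: clients train locally, the server averages the
# resulting weights; raw data never leaves a client. Illustrative only.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on a linear model (squared loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """Server aggregates client updates, weighted by local dataset size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    return np.average(local_ws, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):                    # four "vehicles" with private data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):                   # communication rounds: only weights move
    w = fedavg_round(w, clients)
print(w)                              # approaches [2, -1]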

    Ultra-Dense Mobile Networks: Optimal Design and Communications Strategies

    This thesis conducts an extensive analysis within the mobile-telecommunications sub-field of ultra-dense mobile networks, in which a massive deployment of network equipment is assumed. Future cache-enabled mobile networks are expected to meet most content demands directly at the edge, where each node can proactively store a set of contents in a local memory. This thesis makes several important contributions. It proposes new analytical expressions for modeling the performance of the network's edge. Base-station idling techniques are also investigated, which temporarily turn off some network nodes, saving energy and, in some circumstances, improving overall performance by contributing less interference at the network's edge. Using fewer base stations, however, reduces the amount of resources available at the edge. A trade-off is therefore investigated that balances interference saturation against available resources to increase the average user's quality of experience. In this work, we treat the edge-node density as a variable of the problem. This greatly increases the difficulty of obtaining analytical expressions, but it also gives direct access to optimizing the users' average performance and the network's energy consumption. An energy-focused performance metric is subsequently proposed to highlight an interesting duality within a single network tier, which can transition from a more energy-efficient state to a higher-performing one according to the operators' energy expenses. Moreover, in an ultra-dense scenario, line-of-sight wireless links between the user and the nodes become more likely. Introducing a dominant component among the multi-path copies of a signal complicates the analysis; a feasible approximation is proposed and validated through a set of computer simulations. The scalability of the proposed technique allows existing results in the literature to be generalised.
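
    As a toy illustration of why edge caching pays off in such networks, the following sketch computes the cache hit probability when each edge node stores the C most popular items of a Zipf-distributed catalogue; the catalogue size, cache size, and Zipf exponents are assumed values, not parameters from the thesis.

# Hit probability under a Zipf popularity model with "cache the C most
# popular contents" placement; all parameters are illustrative.
import numpy as np

def zipf_popularity(N, s):
    """Normalized Zipf request probabilities over a catalogue of N items."""
    ranks = np.arange(1, N + 1)
    p = ranks ** (-s)
    return p / p.sum()

def hit_probability(N, C, s):
    """Probability a request falls in the C most popular contents."""
    return zipf_popularity(N, s)[:C].sum()

for s in (0.6, 0.8, 1.0, 1.2):
    print(f"s={s}: hit prob with C=100 of N=10000 ->"
          f" {hit_probability(10_000, 100, s):.3f}")
# Steeper popularity (larger s) lets a small edge cache absorb more traffic.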

    Optimizing Resource Allocation with Energy Efficiency and Backhaul Challenges

    Future wireless mobile communication aims to increase data rates, coverage, and reliability while reducing energy consumption and latency, and it must also cope with explosive mobile traffic growth that places high demands on the backhaul for massive content delivery. To meet these requirements, developing green communication and reducing backhaul requirements have become two significant trends. One promising technique for green communication is wireless power transfer (WPT), which facilitates energy-efficient architectures such as simultaneous wireless information and power transfer (SWIPT). Edge caching, on the other hand, brings content closer to the users by storing popular content in caches installed at the network edge, reducing peak-time traffic, backhaul cost, and latency. In this thesis, we focus on resource allocation for these emerging network architectures, i.e., SWIPT-enabled multiple-antenna systems and cache-enabled cellular systems, to tackle the challenges of limited resources such as insufficient energy supply and backhaul capacity. We start with the joint design of beamforming and power-transfer ratios for SWIPT in MISO broadcast channels and MIMO relay systems, respectively, aiming to maximize energy efficiency subject to both Quality of Service (QoS) constraints and energy-harvesting constraints. We then move to content-placement optimization for cache-enabled heterogeneous small-cell networks so as to minimize the backhaul requirements. In particular, we enable multicast content delivery and cooperative content sharing using maximum distance separable (MDS) codes to provide further caching gains. Both analysis and simulation results are provided throughout the thesis to demonstrate the benefits of the proposed algorithms over state-of-the-art methods.
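
    The SWIPT power-splitting trade-off at a single receiver can be sketched as follows: a fraction rho of the received power feeds the information decoder and the remainder the energy harvester. All numbers (transmit power, channel gain, noise power, harvester efficiency, harvesting target) are illustrative assumptions; the thesis additionally optimizes the beamforming vectors, which this toy omits.

# Power-splitting SWIPT sketch: grid-search the split ratio rho subject
# to a minimum energy-harvesting constraint. Illustrative values only.
import numpy as np

P, g, noise, eta = 1.0, 0.5, 1e-3, 0.6   # tx power, channel gain, noise, harvester efficiency
E_min = 0.05                             # assumed harvesting target (W)

best = None
for rho in np.linspace(0.01, 0.99, 99):  # fraction of power sent to the decoder
    rate = np.log2(1 + rho * P * g / noise)   # bits/s/Hz at the decoder
    energy = eta * (1 - rho) * P * g          # harvested power
    if energy >= E_min:                       # feasibility check
        if best is None or rate > best[1]:
            best = (rho, rate, energy)

rho, rate, energy = best
# With the transmit power fixed, maximizing energy efficiency reduces to
# maximizing rate, so the search picks the largest feasible rho.
print(f"rho={rho:.2f}  rate={rate:.2f} b/s/Hz  harvested={energy:.3f} W")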

    Learning-based Decision Making in Wireless Communications

    Fueled by emerging applications and an exponential increase in data traffic, wireless networks have recently grown significantly and become more complex. In such large-scale complex wireless networks, it is challenging, and oftentimes infeasible, for conventional optimization methods to quickly solve critical decision-making problems. With this motivation, in this thesis, machine learning methods are developed and utilized to obtain optimal or near-optimal solutions for timely decision making in wireless networks. Content caching at the edge nodes is a promising technique to reduce the data traffic in next-generation wireless networks. In this context, in the first part of the thesis we study content caching at the wireless network edge using a deep reinforcement learning framework with the Wolpertinger architecture. Initially, we develop a learning-based caching policy for a single base station aiming at maximizing the long-term cache hit rate. Then, we extend this study to a wireless communication network with multiple edge nodes. In particular, we propose deep actor-critic reinforcement learning based policies for both centralized and decentralized content caching. Next, with the purpose of making efficient use of limited spectral resources, we develop a deep actor-critic reinforcement learning based framework for dynamic multichannel access. We consider both a single-user case and a scenario in which multiple users attempt to access channels simultaneously. In the single-user model, in order to evaluate the performance of the proposed channel access policy and the framework's tolerance against uncertainty, we explore different channel switching patterns and different switching probabilities. In the case of multiple users, we analyze the probabilities of each user accessing channels with favorable channel conditions and the probability of collision. Following the analysis of the proposed learning-based dynamic multichannel access policy, we consider adversarial attacks on it. In particular, we propose two adversarial policies, one based on feed-forward neural networks and the other based on deep reinforcement learning. Both attack strategies aim at minimizing the accuracy of a deep reinforcement learning based dynamic channel access agent, and we demonstrate and compare their performances. Next, anomaly detection is studied as an active hypothesis testing problem. Specifically, we study deep reinforcement learning based active sequential testing for anomaly detection. We assume that there is an unknown number of abnormal processes at a time and that the agent can check only one sensor in each sampling step. To maximize the confidence level of the decision and minimize the stopping time concurrently, we propose a deep actor-critic reinforcement learning framework that dynamically selects the sensor based on the posterior probabilities. Separately, we also treat the detection of threshold crossings as an anomaly detection problem and analyze it via hierarchical generative adversarial networks (GANs). In the final part of the thesis, to address state estimation and detection problems in the presence of noisy sensor observations and probing costs, we develop a soft actor-critic deep reinforcement learning framework. Moreover, considering Byzantine attacks, we design a GAN-based framework to identify the Byzantine sensors. To evaluate the proposed framework, we measure the performance in terms of detection accuracy, stopping time, and the total probing cost needed for detection.
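
    The thesis relies on deep actor-critic methods; as a much smaller stand-in, the sketch below runs tabular Q-learning for single-user dynamic multichannel access over two Gilbert-Elliott channels, with transition probabilities assumed purely for illustration.

# Tabular Q-learning for dynamic multichannel access: the agent senses one
# channel per slot and observes only that channel's state. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
N_CH, P_GG, P_BG = 2, 0.9, 0.4        # channels; P(good->good), P(bad->good)
Q = np.zeros((2 * N_CH, N_CH))        # agent state = (last channel, last outcome)
alpha, gamma, eps = 0.1, 0.9, 0.1

ch = rng.integers(0, 2, size=N_CH)    # hidden channel states (1 = good)
state, total, T = 0, 0, 20_000
for _ in range(T):
    p_good = np.where(ch == 1, P_GG, P_BG)           # per-channel Markov step
    ch = (rng.random(N_CH) < p_good).astype(int)
    a = int(rng.integers(N_CH)) if rng.random() < eps else int(Q[state].argmax())
    r = int(ch[a])                                   # reward 1 iff sensed channel is good
    nxt = 2 * a + r                                  # next state from the partial observation
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
    state, total = nxt, total + r
print(f"average reward over {T} slots: {total / T:.3f}")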

    An event-aware cluster-head rotation algorithm for extending the lifetime of wireless sensor networks with smart nodes

    Smart sensor nodes can process data collected from sensors, make decisions, and recognize relevant events based on the sensed information before sharing it with other nodes. In wireless sensor networks, smart sensor nodes are usually grouped in clusters for effective cooperation, and one sensor node in each cluster must act as the cluster head. The cluster head depletes its energy resources faster than the other nodes, so the cluster-head role must be periodically reassigned (rotated) among the sensor nodes to achieve a long network lifetime. This paper introduces a method for extending the lifetime of wireless sensor networks with smart nodes. The proposed method combines a new algorithm for rotating the cluster-head role among sensor nodes with suppression of unnecessary data transmissions. It enables effective control of the cluster-head rotation based on the expected energy consumption of the sensor nodes, estimated using a lightweight model that takes transmission probabilities into account. The method was implemented in a prototype wireless sensor network, and detailed measurements of lifetime and energy consumption were conducted on this real deployment during the experimental evaluation. The results of these realistic experiments reveal that the proposed method extends the network lifetime in comparison with state-of-the-art cluster-head rotation algorithms.
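
    A minimal sketch of an energy-aware rotation criterion in this spirit is given below: the node expected to sustain the head role longest (residual energy divided by a predicted per-round cost that scales with the members' transmission probabilities) takes the role. The cost model and its constants are hypothetical stand-ins, not the paper's lightweight energy model.

# Pick the next cluster head by expected sustainable rounds as head.
# Each node's tx_prob is the probability its event-detection logic does
# not suppress the report in a round. Constants are assumed units.
E_HEAD_BASE = 2.0      # per-round baseline cost of acting as head
E_RX_PER_MEMBER = 0.3  # cost of receiving one member's report

def pick_cluster_head(nodes):
    """nodes: list of dicts with residual 'energy' and 'tx_prob'."""
    def rounds_as_head(n, members):
        expected_rx = sum(m["tx_prob"] for m in members)
        cost = E_HEAD_BASE + E_RX_PER_MEMBER * expected_rx
        return n["energy"] / cost
    return max(nodes,
               key=lambda n: rounds_as_head(n, [m for m in nodes if m is not n]))

cluster = [
    {"id": 1, "energy": 9.0, "tx_prob": 0.8},
    {"id": 2, "energy": 7.5, "tx_prob": 0.2},
    {"id": 3, "energy": 9.5, "tx_prob": 0.9},
]
print(pick_cluster_head(cluster)["id"])  # node 3 sustains the most rounds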

    one6G white paper, 6G technology overview: Second Edition, November 2022

    6G is expected to address the demands for mobile networking services in 2030 and beyond. These are characterized by a variety of diverse, often conflicting requirements, from technical ones such as extremely high data rates, an unprecedented scale of communicating devices, high coverage, low communication latency, and flexibility of extension, to non-technical ones such as enabling sustainable growth of society as a whole, e.g., through the energy efficiency of deployed networks. On the one hand, 6G is expected to fulfil all these individual requirements, thus extending the limits set by the previous generations of mobile networks (e.g., ten times lower latency or a hundred times higher data rates than in 5G). On the other hand, 6G should also enable use cases characterized by combinations of these requirements never seen before (e.g., both extremely high data rates and extremely low communication latency). In this white paper, we give an overview of the key enabling technologies that constitute the pillars of the evolution towards 6G. They include: terahertz frequencies (Section 1), 6G radio access (Section 2), next-generation MIMO (Section 3), integrated sensing and communication (Section 4), distributed and federated artificial intelligence (Section 5), the intelligent user plane (Section 6), and flexible programmable infrastructures (Section 7). For each enabling technology, we first give the background on how and why the technology is relevant to 6G, backed up by a number of relevant use cases. We then describe the technology in detail, outline the key problems and difficulties, and give a comprehensive overview of the state of the art. 6G is, however, not limited to these seven technologies; they merely represent our current understanding of the technological environment in which 6G is being born. Future versions of this white paper may include other relevant technologies, as well as a discussion of how these technologies can be combined in a coherent system.

    Impacts of Mobility Models on RPL-Based Mobile IoT Infrastructures: An Evaluative Comparison and Survey

    With the widespread use of IoT applications and the increasing number of connected smart devices, routing has become very challenging. In this regard, the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) was standardized for adoption in IoT networks. Nevertheless, while mobile IoT domains have gained significant popularity in recent years, RPL was fundamentally designed for stationary IoT applications and cannot adjust well to the dynamic fluctuations of mobile applications. Although there have been a number of studies on tuning RPL for mobile IoT applications, there is still a high demand for further efforts towards a standard version of this protocol for such applications. Accordingly, in this survey, we conduct a precise and comprehensive experimental study of the impact of various mobility models on the performance of a mobility-aware RPL to support this process. To this end, a complete and scrutinized survey of the mobility models is presented so that the results can be fairly justified and compared. A significant set of evaluations has been conducted via precise IoT simulation tools to monitor and compare the performance of the network and its IoT devices in mobile RPL-based IoT applications under different mobility models, from several perspectives including power consumption, reliability, latency, and control-packet overhead. This will pave the way for researchers in both academia and industry to compare the impact of various mobility models on the functionality of RPL and, consequently, to design and implement application-specific, and even standard, versions of this protocol capable of being employed in mobile IoT applications.
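
    As an example of the mobility models such an evaluation compares, the sketch below generates a node trajectory under the classical Random Waypoint model; the area size, speed range, and pause time are assumed values rather than the survey's simulation settings.

# Random Waypoint mobility: pick a uniform waypoint, travel to it at a
# uniform random speed, pause, repeat. Parameters are illustrative.
import numpy as np

def random_waypoint(steps, area=100.0, v_min=0.5, v_max=1.5,
                    pause=2.0, dt=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, area, size=2)
    target = rng.uniform(0, area, size=2)
    speed = rng.uniform(v_min, v_max)
    trace, wait = [], 0.0
    for _ in range(steps):
        if wait > 0:                       # pausing at a waypoint
            wait -= dt
        else:
            d = target - pos
            dist = np.linalg.norm(d)
            if dist <= speed * dt:         # waypoint reached: draw a new one
                pos = target
                target = rng.uniform(0, area, size=2)
                speed = rng.uniform(v_min, v_max)
                wait = pause
            else:
                pos = pos + (speed * dt / dist) * d
        trace.append(pos.copy())
    return np.array(trace)

print(random_waypoint(5))   # first few positions of a mobile IoT node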

    Fundamental Limits of Caching: Symmetry Structure and Coded Placement Schemes

    Caching is a technique to reduce the communication load in peak hours by prefetching contents during off-peak hours. In 2014, Maddah-Ali and Niesen introduced a framework for coded caching and showed that significant improvement can be obtained compared to uncoded caching. Considerable efforts have been devoted to identifying the precise information-theoretic fundamental limit of such systems; however, the difficulty of this task has also become clear. One of the reasons for this difficulty is that the original coded caching setting allows multiple demand types during delivery, which introduces tension in the coding strategy to accommodate all of them. We seek to develop a better understanding of the fundamental limit of coded caching. In order to characterize the fundamental tradeoff between the amount of cache memory and the delivery transmission rate of multiuser caching systems, various coding schemes have been proposed in the literature. These schemes can largely be categorized into two classes, namely uncoded prefetching schemes and coded prefetching schemes. While uncoded prefetching schemes in general offer order-wise optimal performance, coded prefetching schemes often perform better in the low cache-memory regime. At first sight it seems impossible to connect these two types of coding schemes, yet finding a unified coding scheme that achieves the optimal memory-rate tradeoff is an important and interesting problem. We take the first step in this direction and provide a connection between the uncoded prefetching scheme proposed by Maddah-Ali and Niesen (and its improved version by Yu et al.) and the coded prefetching scheme proposed by Tian and Chen. The intermediate operating points of this general scheme provide new memory-rate tradeoff points not previously known to be achievable in the literature. This new general coding scheme is then presented and analyzed rigorously, yielding a new inner bound to the memory-rate tradeoff for the caching problem. While studying the general case can be difficult, we found that studying single-demand-type systems provides important insights. Motivated by these findings, we focus on systems where the number of users and the number of files are the same and the demand type is the one in which all files are requested. A novel coding scheme is proposed, which provides several optimal memory-transmission operating points. Outer bounds for this class of systems are also considered, and their relation with existing bounds is discussed. Outer-bounding the fundamental limits of the coded caching problem is difficult, not only because there are a vast number of information inequalities and problem-specific equalities to choose from, but also because identifying a useful (and often quite small) subset of them, and combining them to produce an improved outer bound, is a hard problem. Information inequalities can be used to derive the fundamental limits of information systems. Many information inequalities and problem-specific constraints are linear equalities or inequalities of joint entropies, and thus outer bounding the fundamental limits can be viewed as, and in principle computed through, linear programming. However, for many practical engineering problems, the resultant linear program (LP) is very large, rendering such a computational approach almost completely inapplicable in practice.
    The symmetry structure of the caching problem can be used to reduce the scale of this LP significantly. We provide a method to pinpoint this reduction by counting the number of orbits induced by the symmetry on the set of LP variables and the set of LP constraints, respectively, and we propose a generic three-layer decomposition of the group structure for this purpose. This general approach can also be applied to various other problems such as extremal pairwise cyclically symmetric entropy inequalities and the regenerating-code problem. Decentralized coded caching is applicable in scenarios where the server is uninformed of the number and identities of the active users in a wireless or mobile environment. We propose a decentralized coded prefetching strategy in which both prefetching and delivery are coded. The proposed strategy outperforms the existing decentralized uncoded caching strategy in regimes of small cache size when the number of files is less than the number of users. Methods to manage the coding overhead are further suggested.
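
    For reference, the Maddah-Ali-Niesen uncoded-prefetching scheme mentioned above achieves, for K users and N files, the corner points M = tN/K and R = (K - t)/(t + 1) for t = 0, ..., K, whose lower convex envelope is its memory-rate tradeoff. The sketch below merely enumerates these standard corner points; it is not the dissertation's new scheme.

# Corner points of the Maddah-Ali-Niesen coded caching scheme
# (uncoded placement, coded delivery), in exact arithmetic.
from fractions import Fraction

def mn_corner_points(K, N):
    """Memory-rate corner points for K users and N files."""
    points = []
    for t in range(K + 1):
        M = Fraction(t * N, K)          # per-user cache size at this point
        R = Fraction(K - t, t + 1)      # delivery rate K(1 - M/N)/(1 + KM/N)
        points.append((M, R))
    return points

for M, R in mn_corner_points(K=3, N=3):
    print(f"M = {M}, R = {R}")          # (0,3), (1,1), (2,1/3), (3,0)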
