A Bilevel Optimization Approach for Joint Offloading Decision and Resource Allocation in Cooperative Mobile Edge Computing
This paper studies a multi-user cooperative mobile edge computing offloading (CoMECO) system in a multi-user interference environment, in which delay-sensitive tasks may be executed on local devices, cooperative devices, or the primary MEC server. In this system, we jointly optimize the offloading decision and computation resource allocation to minimize the total energy consumption of all mobile users under the delay constraint. If this problem is solved directly, the offloading decision and computation resource allocation are generally generated simultaneously but essentially independently; since the two are closely coupled, their dependency is then not well considered, leading to poor performance. We transform this problem into a bilevel optimization problem, in which the offloading decision is generated in the upper level, and the optimal allocation of computation resources is then obtained in the lower level based on the given offloading decision. In this way, the dependency between the offloading decision and computation resource allocation can be fully taken into account. Subsequently, a bilevel optimization approach, called BiJOR, is proposed. In BiJOR, candidate modes are first pruned to reduce the number of infeasible offloading decisions. Afterward, the upper level optimization problem is solved by the ant colony system (ACS). Furthermore, a sorting strategy is incorporated into ACS to construct feasible offloading decisions with higher probability, and a local search operator is designed in ACS to accelerate convergence. The lower level optimization problem is solved by the monotonic optimization method. In addition, BiJOR is extended to handle a more complex scenario with channel selection. Extensive experiments are carried out to investigate the performance of BiJOR on two sets of instances with up to 400 mobile users. The experimental results demonstrate the effectiveness of BiJOR and the superiority of the CoMECO system.
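The bilevel decomposition described above can be sketched with a toy model (hypothetical parameters and cost terms; this is not the paper's CoMECO formulation or the BiJOR algorithm): the upper level enumerates binary offloading decisions, and for each decision the lower level allocates the server's CPU budget and checks the delay constraint.

```python
from itertools import product

def lower_level(x, cycles, F, deadline):
    """Lower level: given offloading decision x, split the server CPU budget F
    proportionally to the offloaded tasks' workloads (a toy allocation rule).
    Returns the allocation, or None if some offloaded task misses its deadline."""
    offloaded = [i for i, xi in enumerate(x) if xi == 1]
    if not offloaded:
        return {}
    total = sum(cycles[i] for i in offloaded)
    alloc = {i: F * cycles[i] / total for i in offloaded}
    if any(cycles[i] / alloc[i] > deadline for i in offloaded):
        return None
    return alloc

def energy(x, cycles, kappa=1e-27, f_local=1e9, e_tx=0.3):
    """Toy energy model: local execution costs kappa * f_local^2 * cycles;
    offloading costs a fixed transmission energy e_tx (hypothetical numbers)."""
    return sum(e_tx if xi == 1 else kappa * f_local**2 * cycles[i]
               for i, xi in enumerate(x))

def bilevel_search(cycles, F=3e9, f_local=1e9, deadline=1.0):
    """Upper level: exhaustively enumerate offloading decisions; the lower
    level solves the resource allocation for each candidate decision."""
    best = None
    for x in product([0, 1], repeat=len(cycles)):
        alloc = lower_level(x, cycles, F, deadline)
        if alloc is None:
            continue
        # locally executed tasks must also meet the deadline
        if any(xi == 0 and cycles[i] / f_local > deadline
               for i, xi in enumerate(x)):
            continue
        e = energy(x, cycles)
        if best is None or e < best[1]:
            best = (x, e)
    return best

decision, e_total = bilevel_search([5e8, 8e8, 6e8])
print(decision, e_total)
```

BiJOR replaces the exhaustive upper-level enumeration with an ant colony system search and the proportional lower-level rule with monotonic optimization; the sketch only illustrates the decomposition and why the lower level can be solved exactly once the decision is fixed.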
Fundamental Limits of Caching: Symmetry Structure and Coded Placement Schemes
Caching is a technique to reduce the communication load in peak hours by prefetching contents
during off-peak hours. In 2014, Maddah-Ali and Niesen introduced a framework for coded
caching, and showed that significant improvement can be obtained compared to uncoded caching. Considerable effort has been devoted to identifying the precise information-theoretic fundamental limit of such systems; however, the difficulty of this task has also become clear. One of the reasons for this difficulty is that the original coded caching setting allows multiple demand types during delivery, which in fact introduces tension in the coding strategy to accommodate all of them. We seek to develop a better understanding of the fundamental limit of coded caching.
In order to characterize the fundamental limit of the tradeoff between the amount of cache
memory and the delivery transmission rate of multiuser caching systems, various coding schemes have been proposed in the literature. These schemes can largely be categorized into two classes, namely uncoded prefetching schemes and coded prefetching schemes. While uncoded prefetching schemes in general offer order-wise optimal performance, coded prefetching schemes often have better performance in the low cache memory regime. At first sight it seems impossible to connect these two different types of coding schemes, yet finding a unified coding scheme that achieves the optimal memory-rate tradeoff is an important and interesting problem. We take the first step in this direction and provide a connection between the uncoded prefetching scheme proposed by Maddah-Ali and Niesen (and its improved version by Yu et al.) and the coded prefetching scheme proposed by Tian and Chen. The intermediate operating points of this general scheme can in fact provide new memory-rate tradeoff points previously not known to be achievable in the literature. This new general coding scheme is then presented and analyzed rigorously, which yields a new inner bound to the memory-rate tradeoff for the caching problem.
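As a concrete reference point, the corner points of the two uncoded-prefetching baselines mentioned above can be computed directly. This is an illustrative sketch using the standard textbook expressions for N files and K users, not the unified scheme proposed in this work; the rate formula in `man_points` is the N >= K expression.

```python
from math import comb

def man_points(N, K):
    """Corner points (M, R) of the Maddah-Ali--Niesen scheme:
    cache size M = t*N/K, delivery rate R = (K - t)/(t + 1)."""
    return [(t * N / K, (K - t) / (t + 1)) for t in range(K + 1)]

def yu_points(N, K):
    """Corner points with the improved delivery rate of Yu et al.:
    R = [C(K, t+1) - C(K - min(N, K), t+1)] / C(K, t),
    which is tighter than the MAN rate when N < K."""
    return [(t * N / K,
             (comb(K, t + 1) - comb(max(K - N, 0), t + 1)) / comb(K, t))
            for t in range(K + 1)]

# Example with fewer files than users (N=2, K=4), where the two rates differ.
for (M, r_man), (_, r_yu) in zip(man_points(2, 4), yu_points(2, 4)):
    print(f"M={M:.2f}  R_MAN={r_man:.3f}  R_Yu={r_yu:.3f}")
```

The coded-prefetching scheme of Tian and Chen adds further points below these curves in the small-memory regime, which is precisely the gap the unified scheme in this work interpolates across.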
While studying the general case can be difficult, we found that studying the single demand type
systems will provide important insights. Motivated by these findings, we focus on systems where the number of users and the number of files are the same, and on the demand type in which all files are requested. A novel coding scheme is proposed, which provides several optimal memory-rate operating points. Outer bounds for this class of systems are also considered, and their relation with existing bounds is discussed.
Outer-bounding the fundamental limits of the coded caching problem is difficult, not only because there are a vast number of information inequalities and problem-specific equalities to choose from, but also because identifying a useful (and often quite small) subset of them, and determining how to combine them to produce an improved outer bound, is a hard problem. Information inequalities can be used to derive the fundamental limits of information systems. Many information inequalities and problem-specific constraints are linear equalities or inequalities of joint entropies, and thus outer bounding the fundamental limits can be viewed as, and in principle computed through, linear programming. However, for many practical engineering problems, the resultant linear program (LP) is very large, rendering such a computational approach almost completely inapplicable in practice. Symmetry in the problem can be exploited to reduce the size of this LP, and we provide a method to pinpoint this reduction by counting the number of orbits induced by the symmetry on the sets of LP variables and LP constraints, respectively. We propose a generic three-layer decomposition of the group structures for this purpose. This general approach can also be applied to various other problems, such as extremal pairwise cyclically symmetric entropy inequalities and the regenerating code problem.
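The orbit-counting idea can be illustrated on a toy instance (a sketch, not the three-layer decomposition itself): if the LP variables are joint entropies indexed by subsets of n random variables and the problem is invariant under a permutation group, Burnside's lemma counts the orbits, i.e., the number of distinct variables that remain after symmetry reduction.

```python
from itertools import permutations

def num_orbits_subsets(n, group):
    """Burnside's lemma: the number of orbits of subsets of {0,...,n-1}
    under a permutation group equals the average number of subsets fixed
    by each group element. A subset is fixed by a permutation iff it is a
    union of its cycles, so a permutation with c cycles fixes 2^c subsets."""
    def cycle_count(perm):
        seen, c = set(), 0
        for i in range(n):
            if i not in seen:
                c += 1
                j = i
                while j not in seen:
                    seen.add(j)
                    j = perm[j]
        return c
    return sum(2 ** cycle_count(p) for p in group) // len(group)

# Full symmetric group on 4 elements: subset orbits are classified by
# cardinality alone, so 2^4 = 16 LP variables collapse to n + 1 = 5 orbits.
S4 = list(permutations(range(4)))
print(num_orbits_subsets(4, S4))  # prints 5
```

In the actual caching LP the index set and the symmetry group are much larger, but the same counting principle quantifies how far the LP can shrink before it is ever instantiated.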
Decentralized coded caching is applicable in scenarios when the server is uninformed of the
number of active users and their identities in a wireless or mobile environment. We propose a
decentralized coded prefetching strategy where both prefetching and delivery are coded. The proposed strategy indeed outperforms the existing decentralized uncoded caching strategy in the small cache size regime when the number of files is less than the number of users. Methods to manage the coding overhead are further suggested.
Wireless networks QoS optimization using coded caching and machine learning algorithms
Proactive caching shows great potential to minimize peak traffic rates by storing
popular data, in advance, at different nodes in the network. We study three new
angles of proactive caching that were not covered before in the literature. We develop
more practical algorithms that bring proactive caching closer to practical wireless
networks.
The first angle is where the popularities of the cached files are changing over
time and the file delivery is asynchronous. We provide an algorithm that minimizes
files’ delivery rate under this setting. We show that we can use the file delivery
messages to proactively and continually update the receivers' finite caches. We show
that this mechanism reduces the downloaded traffic of the network. The proposed
scheme uses index coding [1] (Appendix A) to jointly encode the delivery of different
demanded files with the cache updates to other receivers, in order to follow the changes
in the file popularities. Offline and online (dynamic) versions of the scheme are proposed,
where the offline version requires knowledge of the file popularities across the whole
transmission period in advance and the online one requires the file popularities for
one succeeding time slot only. The optimal caching for both the offline and online
schemes is obtained numerically.
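The index-coding idea behind this joint encoding can be seen in a minimal two-receiver example (illustrative only, not the paper's full scheme): when each receiver already caches what the other one wants, a single XOR broadcast replaces two unicast transmissions.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Receiver 1 wants file A and has B cached; receiver 2 wants B and has A.
A, B = b"file-A", b"file-B"
coded = xor_bytes(A, B)          # one coded broadcast instead of two unicasts
print(xor_bytes(coded, B) == A)  # receiver 1 decodes A using its cache: True
print(xor_bytes(coded, A) == B)  # receiver 2 decodes B using its cache: True
```

The proposed scheme generalizes this: the coded delivery message simultaneously serves current demands and refreshes other receivers' caches as the file popularities drift.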
The second angle is the study of segmented caching for delay minimization in
networks with congested backhaul. Studies have mainly focused on proactively storing
popular whole files. For certain categories of files like videos, this is not the best
strategy. As videos can be segmented, sending later segments of videos can be less
time-critical. Video is expected to constitute 82% of internet traffic by 2020 [2]. We
study the effect of segmenting video caching decisions under the assumption that the
backhaul is congested. We provide an algorithm for proactive segmented caching that
optimizes the choice of segments to be cached to minimize delay, and we compare its
performance to whole-file proactive caching.
The third angle focuses on using reinforcement learning for coded caching
in networks with changing file popularities. For such a dynamic environment,
reinforcement learning has the flexibility to learn the environment and adapt
accordingly. We develop a reinforcement learning-based coded caching algorithm
and compare its performance to rule-based coded caching.
Image Restoration
This book represents a sample of recent contributions of researchers around the world in the field of image restoration. The book consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). Topics cover different aspects of the theory of image restoration, but this book is also an occasion to highlight some new topics of research related to the emergence of original imaging devices. From these arise some real challenging problems related to image reconstruction/restoration that open the way to new fundamental scientific questions closely related to the world we interact with.
one6G white paper, 6G technology overview: Second Edition, November 2022
6G is supposed to address the demands for consumption of mobile networking services in 2030 and beyond. These are characterized by a variety of diverse, often conflicting requirements, from technical ones such as extremely high data rates, an unprecedented scale of communicating devices, high coverage, low communication latency, and flexibility of extension, to non-technical ones such as enabling the sustainable growth of society as a whole, e.g., through the energy efficiency of deployed networks. On the one hand, 6G is expected to fulfil all these individual requirements, thus extending the limits set by the previous generations of mobile networks (e.g., ten times lower latencies, or a hundred times higher data rates than in 5G). On the other hand, 6G should also enable use cases characterized by combinations of these requirements never seen before (e.g., both extremely high data rates and extremely low communication latency). In this white paper, we give an overview of the key enabling technologies that constitute the pillars for the evolution towards 6G. They include: terahertz frequencies (Section 1), 6G radio access (Section 2), next generation MIMO (Section 3), integrated sensing and communication (Section 4), distributed and federated artificial intelligence (Section 5), intelligent user plane (Section 6), and flexible programmable infrastructures (Section 7). For each enabling technology, we first give the background on how and why the technology is relevant to 6G, backed up by a number of relevant use cases. After that, we describe the technology in detail, outline the key problems and difficulties, and give a comprehensive overview of the state of the art in that technology. 6G is, however, not limited to these seven technologies. They merely present our current understanding of the technological environment in which 6G is being born.
Future versions of this white paper may include other relevant technologies too, as well as discuss how these technologies can be glued together into a coherent system.
Learning-based Decision Making in Wireless Communications
Fueled by emerging applications and exponential increase in data traffic, wireless networks have recently grown significantly and become more complex. In such large-scale complex wireless networks, it is challenging and, oftentimes, infeasible for conventional optimization methods to quickly solve critical decision-making problems. With this motivation, in this thesis, machine learning methods are developed and utilized for obtaining optimal/near-optimal solutions for timely decision making in wireless networks.
Content caching at the edge nodes is a promising technique to reduce the data traffic in next-generation wireless networks. In this context, we in the first part of the thesis study content caching at the wireless network edge using a deep reinforcement learning framework with Wolpertinger architecture. Initially, we develop a learning-based caching policy for a single base station aiming at maximizing the long-term cache hit rate. Then, we extend this study to a wireless communication network with multiple edge nodes. In particular, we propose deep actor-critic reinforcement learning based policies for both centralized and decentralized content caching.
Next, with the purpose of making efficient use of limited spectral resources, we develop a deep actor-critic reinforcement learning based framework for dynamic multichannel access. We consider both a single-user case and a scenario in which multiple users attempt to access channels simultaneously. In the single-user model, in order to evaluate the performance of the proposed channel access policy and the framework's tolerance against uncertainty, we explore different channel switching patterns and different switching probabilities. In the case of multiple users, we analyze the probabilities of each user accessing channels with favorable channel conditions and the probability of collision.
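The learning problem behind dynamic channel access can be illustrated with a tabular, bandit-style stand-in for the deep actor-critic agent (a toy with hypothetical success probabilities, not the thesis's framework): the agent learns from rewards alone which channel tends to yield successful transmissions.

```python
import random

# Toy setup: 2 channels; channel 1 succeeds far more often (made-up rates).
random.seed(0)
P_SUCCESS = [0.2, 0.9]   # hypothetical per-channel transmission success rates
Q = [0.0, 0.0]           # value estimates for each channel
alpha, eps = 0.1, 0.1    # learning rate and exploration probability

for step in range(5000):
    # epsilon-greedy channel selection
    a = random.randrange(2) if random.random() < eps else Q.index(max(Q))
    # reward 1 on a successful transmission, 0 otherwise
    reward = 1.0 if random.random() < P_SUCCESS[a] else 0.0
    # incremental value update (stateless, bandit-style)
    Q[a] += alpha * (reward - Q[a])

print(Q.index(max(Q)))   # the agent learns to prefer the better channel
```

The deep actor-critic version in the thesis replaces the table with neural networks and conditions on observed channel state, which is what makes it applicable when the number of channels and switching patterns grows.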
Following the analysis of the proposed learning-based dynamic multichannel access policy, we consider adversarial attacks on it. In particular, we propose two adversarial policies, one based on feed-forward neural networks and the other based on deep reinforcement learning policies. Both attack strategies aim at minimizing the accuracy of a deep reinforcement learning based dynamic channel access agent, and we demonstrate and compare their performances.
Next, anomaly detection as an active hypothesis test problem is studied. Specifically, we study deep reinforcement learning based active sequential testing for anomaly detection. We assume that there is an unknown number of abnormal processes at a time and the agent can only check with one sensor in each sampling step. To maximize the confidence level of the decision and minimize the stopping time concurrently, we propose a deep actor-critic reinforcement learning framework that can dynamically select the sensor based on the posterior probabilities. Separately, we also regard the detection of threshold crossing as an anomaly detection problem, and analyze it via hierarchical generative adversarial networks (GANs).
In the final part of the thesis, to address state estimation and detection problems in the presence of noisy sensor observations and probing costs, we develop a soft actor-critic deep reinforcement learning framework. Moreover, considering Byzantine attacks, we design a GAN-based framework to identify the Byzantine sensors. To evaluate the proposed framework, we measure the performance in terms of detection accuracy, stopping time, and the total probing cost needed for detection.
Optimization and Communication in UAV Networks
UAVs are becoming a reality and attract increasing attention. They can be remotely controlled or completely autonomous, and be used alone or as a fleet in a large set of applications. They are constrained by hardware since they cannot be too heavy and rely on batteries. Their use still raises a large set of exciting new challenges in terms of trajectory optimization and positioning when they are used alone or in cooperation, and communication when they evolve in a swarm, to name but a few examples. This book presents some new original contributions regarding UAV or UAV swarm optimization and communication aspects.
Efficient Spectrum Sensing and Sharing Techniques for Dynamic Wideband Spectrum Access
Besides enabling enhanced mobile broadband access, fifth-generation (5G) wireless mobile networks are envisioned to support the connectivity of massive, heterogeneous Internet of Things (IoT) devices. Connecting these devices through 5G systems and providing them with their needed data rates require huge amounts of spectrum and power resources, thus calling for the development and design of innovative, dynamic resource identification, access, and sharing methods that make effective use of these limited resources. This thesis focuses specifically on wideband spectrum sensing, and presents innovative techniques that enable efficient identification and recovery of unused spectrum opportunities in wideband dynamic spectrum access. Recent research efforts have focused on leveraging compressive sampling (CS) theory to enable wideband spectrum sensing recovery at sub-Nyquist rates. However, these approaches suffer from the following shortcomings. First, they consider a homogeneous wideband spectrum, where all bands are assumed to have similar primary user (PU) traffic characteristics, whereas in practice the wideband spectrum occupancy is heterogeneous. Second, the number of measurements that receiver hardware designs are able to perform is in practice far smaller than the number of measurements required by CS-based sensing approaches. Third, the number of measurements required by CS-based sensing approaches depends on the number of occupied bands (i.e., the sparsity level), which is often unknown in advance and changes over time. Fourth, current wideband spectrum databases suffer from scalability issues in that they incur substantial sensing overhead. This thesis proposes a set of new, complementary techniques that overcome these aforementioned challenges. More specifically, in this thesis,
1. We design efficient spectrum occupancy information recovery techniques for heterogeneous wideband spectrum access. Our proposed techniques exploit the block-like structure of spectrum occupancy behavior observed in wideband spectrum access networks to enable the development of compressed spectrum sensing algorithms. Our proposed spectrum sensing algorithms achieve more stable spectrum information
recovery than that achieved by existing approaches.
2. We develop distributed CS-based spectrum sensing techniques for cooperative wideband spectrum access that require fewer measurements while overcoming the time-variability of spectrum occupancy and addressing hidden terminal challenges. Also, we propose a non-uniform sensing matrix design that exploits the heterogeneity in wideband spectrum access to further improve the spectrum sensing recovery
accuracy.
3. We develop scalable spectrum occupancy information recovery techniques for database-driven wideband spectrum access networks. The novelty of our developed techniques lies in combining the merit of compressive sampling theory with that of low-rank matrix theory to enable scalable and accurate wideband spectrum occupancy recovery at low sensing overhead.
4. We propose joint data and energy transfer optimization frameworks for powering mobile cellular devices through RF energy harvesting. Our proposed framework accounts for both the consumed power at the base station and the battery power available at the end users to ensure that end users achieve their required data rates with as little battery power consumption as possible. We also analytically derive closed-form expressions for the optimal power allocations required for meeting the data rate requirements of the downlink and uplink communications between the base station and its mobile users.
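The compressed sensing recovery underlying items 1-3 can be illustrated with a generic Orthogonal Matching Pursuit sketch (a standard CS baseline under made-up dimensions, not the block-structured or distributed algorithms of the thesis): a sparse wideband occupancy vector is recovered from far fewer random measurements than bands.

```python
import numpy as np

# Toy sub-Nyquist recovery: a k-sparse occupancy vector x (few occupied
# bands) is recovered from m << n random linear measurements y = A @ x.
rng = np.random.default_rng(1)
n, m, k = 64, 20, 3                      # bands, measurements, occupied bands

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x

# Orthogonal Matching Pursuit: greedily add the column most correlated
# with the residual, then re-fit by least squares on the chosen support.
support, residual = [], y.copy()
for _ in range(2 * k):
    atom = int(np.argmax(np.abs(A.T @ residual)))
    if atom not in support:
        support.append(atom)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print(float(np.linalg.norm(x_hat - x)))  # near zero on exact recovery
```

The thesis's contribution is precisely to move beyond this generic baseline: exploiting the block-like occupancy structure and spectrum heterogeneity reduces the number of measurements m needed and stabilizes recovery when the sparsity level k is unknown and time-varying.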
Leveraging Machine Learning Techniques towards Intelligent Networking Automation
In this thesis, we address some of the challenges that the Intelligent Networking Automation (INA) paradigm poses. Our goal is to design schemes leveraging Machine Learning (ML) techniques to cope with situations that involve hard decision-making actions. The proposed solutions are data-driven and consist of an agent that operates at network elements such as routers, switches, or network servers. The data are gathered from realistic scenarios, either actual network deployments or emulated environments. To evaluate the enhancements that the designed schemes provide, we compare our solutions to non-intelligent ones. Additionally, we assess the trade-off between the obtained improvements and the computational costs of implementing the proposed mechanisms.
Accordingly, this thesis tackles the challenges that four specific research problems present. The first topic addresses the problem of balancing traffic in dense Internet of Things (IoT) network scenarios where the end devices and the Base Stations (BSs) form complex networks. By applying ML techniques to discover patterns in the association between the end devices and the BSs, the proposed scheme can balance the traffic load in an IoT network to increase the packet delivery ratio and reduce the energy cost of data delivery. The second research topic proposes an intelligent congestion control for internet connections at edge network elements. The design includes a congestion predictor based on an Artificial Neural Network (ANN) and an Active Queue Management (AQM) parameter tuner. Similarly, the third research topic includes an intelligent solution to inter-domain congestion. Unlike the second topic, this problem considers the preservation of private network data by means of Federated Learning (FL), since network elements of several organizations participate in the intelligent process. Finally, the fourth research topic presents a framework for efficiently gathering network telemetry (NT) data. The proposed solution considers a traffic-aware approach so that the NT data are intelligently collected and transmitted by the network elements.
All the proposed schemes are evaluated through use cases considering standardized networking mechanisms. Therefore, we envision that the solutions of these specific problems encompass a set of methods that can be utilized in real-world scenarios towards the realization of the INA paradigm.
Coded caching: Information theoretic bounds and asynchronism
Caching is often used in content delivery networks as a mechanism for reducing network traffic. Recently, the technique of coded caching was introduced whereby coding in the caches and coded transmission signals from the central server were considered. Prior results in this area demonstrate that carefully designing the placement of content in the caches and designing appropriate coded delivery signals from the server allow for a system where the delivery rates can be significantly smaller than conventional schemes.
However, matching upper and lower bounds on the transmission rate have not yet been obtained. In the first part of this thesis we derive tighter lower bounds on the coded caching rate than were known previously. We demonstrate that this problem can equivalently be posed as a combinatorial problem of optimally labeling the leaves of a directed tree. Our proposed labeling algorithm allows for significantly improved lower bounds on the coded caching rate. Furthermore, we study certain structural properties of our algorithm that allow us to analytically quantify improvements on the rate lower bound for general values of the problem parameters. This allows us to obtain a multiplicative gap of at most four between the achievable rate and our lower bound.
The original formulation of the coded caching problem assumes that the file requests from the users are synchronized, i.e., they arrive at the server at the same time. Several subsequent contributions work under the same assumption. Furthermore, the majority of prior work does not consider a scenario where users have deadlines. In the second part of this thesis we formulate the asynchronous coded caching problem, where user requests arrive at different times. Furthermore, the users have specified deadlines. We propose a linear program for obtaining its optimal solution. However, the size of the LP (number of constraints and variables) grows rather quickly with the number of users and cache sizes. To deal with this problem, we explore a dual decomposition based approach for solving the LP under consideration. We demonstrate that the dual function can be evaluated by equivalently solving a number of minimum cost network flow problems.
Moreover, we consider the asynchronous setting where the file requests are revealed to the server in an online fashion. We propose a novel online algorithm for this problem, building on our prior work for the offline setting (where the server knows the request arrival times and deadlines in advance). Our simulation results demonstrate that our proposed online algorithm allows for a natural tradeoff between the feasibility of the schedule and the rate gains of coded caching.