
    A Tutorial on Clique Problems in Communications and Signal Processing

    Since its first use by Euler on the problem of the seven bridges of Königsberg, graph theory has shown excellent abilities in solving and unveiling the properties of multiple discrete optimization problems. The study of the structure of some integer programs reveals equivalence with graph theory problems, making a large body of the literature readily available for solving and characterizing the complexity of these problems. This tutorial presents a framework for utilizing a particular graph theory problem, known as the clique problem, for solving communications and signal processing problems. In particular, the paper aims to illustrate the structural properties of integer programs that can be formulated as clique problems through multiple examples in communications and signal processing. To that end, the first part of the tutorial provides various optimal and heuristic solutions for the maximum clique, maximum weight clique, and k-clique problems. The tutorial further illustrates the use of the clique formulation through numerous contemporary examples in communications and signal processing, mainly in maximum access for non-orthogonal multiple access networks, throughput maximization using index and instantly decodable network coding, collision-free radio frequency identification networks, and resource allocation in cloud-radio access networks. Finally, the tutorial sheds light on recent advances in such applications and provides technical insights on ways of dealing with mixed discrete-continuous optimization problems.
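
    A minimal sketch of a greedy maximum-clique heuristic, in the spirit of the heuristic solutions the tutorial surveys; the adjacency-set representation, the degree-based ordering rule, and the example graph are illustrative assumptions, not the paper's algorithm.

        # Greedy maximum-clique heuristic (illustrative sketch, not the tutorial's method).
        def greedy_max_clique(adj):
            """adj: dict mapping each vertex to the set of its neighbours."""
            clique = []
            # Visit vertices in order of decreasing degree, a common greedy rule.
            for v in sorted(adj, key=lambda u: len(adj[u]), reverse=True):
                # Keep v only if it is adjacent to every vertex already in the clique.
                if all(v in adj[u] for u in clique):
                    clique.append(v)
            return clique

        # Example graph whose maximum clique is {0, 1, 2}.
        graph = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 4}, 3: {0}, 4: {2}}
        print(greedy_max_clique(graph))  # -> [0, 2, 1]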

    A low complexity resource allocation algorithm for multicast service delivery in OFDMA networks

    This paper addresses the challenging problem of allocating and managing radio resources for multicast transmissions in Orthogonal Frequency-Division Multiple Access (OFDMA) systems. A subgrouping technique, which divides the subscribers into subgroups according to the experienced channel quality, is considered to overcome the throughput limitations of conventional multicast data delivery schemes. A low-complexity algorithm, designed to work with different resource allocation strategies, is also proposed to reduce the computational complexity of the subgroup formation problem. Simulation results, carried out by considering the Long Term Evolution (LTE) system based on OFDMA, confirm the effectiveness of the proposed solution, which achieves near-optimal performance with a limited computational load for the system.
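
    As a rough illustration of the subgroup formation idea, the sketch below splits a multicast group into two CQI-based subgroups that share the resource blocks equally, each subgroup being limited by its worst user; the CQI-to-rate table and the equal split are assumptions for the example, not the paper's LTE parameters or allocation strategy.

        # Illustrative two-subgroup formation for OFDMA multicast (assumed rate table).
        RATE_PER_CQI = {1: 0.15, 4: 0.60, 7: 1.48, 10: 2.73, 13: 3.90}  # bits/symbol, placeholder values

        def conventional_throughput(user_cqis, total_rbs):
            # Single multicast group: every user is limited by the worst channel.
            return len(user_cqis) * RATE_PER_CQI[min(user_cqis)] * total_rbs

        def best_two_subgroup_split(user_cqis, total_rbs):
            # Try every CQI-ordered split into two subgroups, each getting half the
            # resource blocks, and return the best aggregate throughput found.
            cqis = sorted(user_cqis)
            best = conventional_throughput(cqis, total_rbs)
            for k in range(1, len(cqis)):
                low, high = cqis[:k], cqis[k:]
                aggregate = (len(low) * RATE_PER_CQI[min(low)] * total_rbs / 2
                             + len(high) * RATE_PER_CQI[min(high)] * total_rbs / 2)
                best = max(best, aggregate)
            return best

        users = [1, 4, 4, 7, 10, 13]  # reported CQIs
        print(conventional_throughput(users, total_rbs=50))   # single-group baseline
        print(best_two_subgroup_split(users, total_rbs=50))   # subgrouping gain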

    Networking - A Statistical Physics Perspective

    Efficient networking has a substantial economic and societal impact in a broad range of areas, including transportation systems, wired and wireless communications, and a range of Internet applications. As transportation and communication networks become increasingly complex, the ever-increasing demand for congestion control, higher traffic capacity, quality of service, robustness, and reduced energy consumption requires new tools and methods to meet these conflicting requirements. The new methodology should serve both for gaining a better understanding of the properties of networking systems at the macroscopic level and for the development of new principled optimization and management algorithms at the microscopic level. Methods of statistical physics seem best placed to provide new approaches, as they have been developed specifically to deal with nonlinear large-scale systems. This paper aims to present an overview of tools and methods that have been developed within the statistical physics community and that can be readily applied to address the emerging problems in networking. These include diffusion processes, methods from disordered systems and polymer physics, and probabilistic inference, which have direct relevance to network routing, file and frequency distribution, the exploration of network structures and vulnerability, and various other practical networking applications. (Review article: 71 pages, 14 figures.)
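
    As a toy illustration of one of the ingredients mentioned above, the sketch below runs a lazy random-walk diffusion process on a tiny network; the graph and step count are arbitrary, and the review itself covers far richer models.

        # Lazy random-walk diffusion on a 4-node ring (toy example).
        import numpy as np

        A = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)   # ring adjacency matrix

        # Lazy walk: stay put with probability 1/2, otherwise move to a random neighbour.
        P = 0.5 * np.eye(4) + 0.5 * A / A.sum(axis=1, keepdims=True)

        p = np.array([1.0, 0.0, 0.0, 0.0])  # all "traffic" starts on node 0
        for _ in range(10):
            p = p @ P
        print(p)  # close to the uniform stationary distribution on this regular graph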

    Raptorq-Based Multihop File Broadcast Protocol

    The objective of this thesis is to describe and implement a RaptorQ broadcast protocol application layer designed for use in a wireless multihop network. The RaptorQ broadcast protocol is a novel application-layer broadcast protocol based on RaptorQ forward error correction. This protocol can deliver a file reliably to a large number of nodes in a wireless multihop network even if the links have high loss rates. We use mixed integer programming with power balance constraints to construct broadcast trees that are suitable for implementing the RaptorQ-based broadcast protocol. The resulting broadcast tree facilitates deployment of mechanisms for verifying successful delivery. We use the Qualcomm proprietary RaptorQ software development kit library as well as a Ruby interface to implement the protocol. During execution, each node operates in one of three main modes: source, transmitter, or leaf. Each mode has five different phases: STARTUP, FINISHING (Poll), FINISHING (Wait), FINISHING (Extra), and COMPLETED. Three threads are utilized to implement the RaptorQ-based broadcast protocol features. Thread 1 receives messages and passes them to the receive buffer. Thread 2 evaluates each received message, which can be NORM, POLL, MORE, or DONE, and passes the response message to the send buffer. Thread 3 multicasts the content of the send buffer. Results obtained by testing the implementation of the RaptorQ-based broadcast protocol demonstrate that efficient and reliable distribution of files over multihop wireless networks with high link loss rates is feasible.
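
    The three-thread structure described above can be sketched as follows; the network and the RaptorQ SDK are replaced here by in-memory queues and canned responses, whereas the thesis implements the real protocol in Ruby against Qualcomm's library.

        # Structural sketch of the three-thread message pipeline (stand-in components).
        import queue
        import threading
        import time

        network_in = queue.Queue()   # stands in for the receiving socket
        recv_buffer = queue.Queue()  # Thread 1 -> Thread 2
        send_buffer = queue.Queue()  # Thread 2 -> Thread 3

        def receiver():
            # Thread 1: take messages off the "wire" and store them in the receive buffer.
            while True:
                recv_buffer.put(network_in.get())

        def evaluator():
            # Thread 2: evaluate each message (NORM, POLL, MORE or DONE) and queue a response.
            responses = {"NORM": "ACK", "POLL": "STATUS", "MORE": "EXTRA-SYMBOLS", "DONE": "DONE-ACK"}
            while True:
                send_buffer.put(responses.get(recv_buffer.get(), "IGNORED"))

        def multicaster():
            # Thread 3: multicast whatever is in the send buffer (printed here).
            while True:
                print("multicast:", send_buffer.get())

        for fn in (receiver, evaluator, multicaster):
            threading.Thread(target=fn, daemon=True).start()

        for kind in ("NORM", "POLL", "MORE", "DONE"):
            network_in.put(kind)     # simulate incoming protocol messages
        time.sleep(0.2)              # let the daemon threads drain the queues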

    Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge

    We envision a mobile edge computing (MEC) framework for machine learning (ML) technologies, which leverages distributed client data and computation resources for training high-performance ML models while preserving client privacy. Toward this future goal, this work aims to extend Federated Learning (FL), a decentralized learning framework that enables privacy-preserving training of models, to work with heterogeneous clients in a practical cellular network. The FL protocol iteratively asks random clients to download a trainable model from a server, update it with their own data, and upload the updated model to the server, while asking the server to aggregate multiple client updates to further improve the model. While clients in this protocol never disclose their own private data, the overall training process can become inefficient when some clients have limited computational resources (i.e., a longer update time) or poor wireless channel conditions (i.e., a longer upload time). Our new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions. Specifically, FedCS solves a client selection problem with resource constraints, which allows the server to aggregate as many client updates as possible and to accelerate performance improvement in ML models. We conducted an experimental evaluation using publicly available large-scale image datasets to train deep neural networks in MEC environment simulations. The experimental results show that FedCS is able to complete its training process in a significantly shorter time than the original FL protocol.
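
    A much-simplified sketch of the client-selection step is shown below: pick as many clients as possible whose estimated update and upload times fit within a round deadline. The greedy rule, the timing model, and the client names are assumptions for illustration; the paper's FedCS formulation also models how uploads are scheduled within a round.

        # Simplified greedy client selection under a round deadline (illustrative only).
        def select_clients(clients, deadline):
            """clients: dict name -> (estimated update time, estimated upload time), in seconds."""
            selected, elapsed = [], 0.0
            # Favour fast clients first so that more of them fit before the deadline.
            for name, (t_update, t_upload) in sorted(clients.items(), key=lambda kv: sum(kv[1])):
                if elapsed + t_update + t_upload <= deadline:
                    selected.append(name)
                    elapsed += t_update + t_upload
            return selected

        clients = {"phone_a": (5.0, 2.0), "phone_b": (20.0, 8.0),
                   "tablet_c": (7.0, 3.0), "laptop_d": (3.0, 1.0)}
        print(select_clients(clients, deadline=30.0))  # -> ['laptop_d', 'phone_a', 'tablet_c']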

    Throughput Optimization in Multi-hop Wireless Networks with Random Access

    This research investigates cross-layer design in multi-hop wireless networks with random access. Due to the complexity of the problem, we study cross-layer design with a simple slotted ALOHA medium access control (MAC) protocol without considering any network dynamics. First, we study the optimal joint configuration of routing and MAC parameters in slotted ALOHA based wireless networks under a signal-to-interference-plus-noise-ratio based physical interference model. We formulate a joint routing and MAC (JRM) optimization problem under a saturation assumption to determine the optimal max-min throughput of the flows and the optimal configuration of routing and MAC parameters. The JRM optimization problem is a complex non-convex problem. We solve it with an iterated optimal search (IOS) technique and validate our model via simulation. Via numerical and simulation results, we show that JRM design provides a significant throughput gain over a default configuration in a slotted ALOHA based wireless network. Next, we study the optimal joint configuration of routing, MAC, and network coding in wireless mesh networks using XOR-like network coding without opportunistic listening. We reformulate the JRM optimization problem to include this simple network coding scheme and obtain a more complex non-convex problem. As with the JRM problem, we solve it with the IOS technique and validate our model via simulation. Numerical and simulation results for different networks illustrate that (i) the jointly optimized configuration provides a remarkable throughput gain with respect to a default configuration in a slotted ALOHA system with network coding, and (ii) the throughput gain obtained by the simple network coding scheme is significant, especially at low transmission power, i.e., the gain obtained by jointly optimizing routing, MAC, and network coding is significant even when compared to an optimized network without network coding. We then show that, in a mesh network, a significant fraction of the throughput gain from network coding can be obtained by limiting network coding to nodes directly adjacent to the gateway. Next, we propose simple heuristics to configure slotted ALOHA based wireless networks without and with network coding. These heuristics are extensively evaluated via simulation and found to be very efficient. We also formulate problems to jointly configure not only the routing and MAC parameters but also the transmission rate parameters in multi-rate slotted ALOHA systems without and with network coding. We compare the performance of multi-rate and single-rate systems via numerical results. Finally, we model the energy consumption in terms of slotted ALOHA system parameters and find that the energy consumption of the various cross-layer systems, i.e., single-rate and multi-rate slotted ALOHA systems without and with network coding, is very similar.
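
    To give a flavour of the MAC side of the optimization in isolation, the toy sketch below picks a common transmission probability for n slotted-ALOHA nodes sharing one receiver under a simple collision model; the thesis's actual formulation uses an SINR-based interference model and jointly optimizes routing, MAC, and network-coding parameters.

        # Toy slotted ALOHA: choose the attempt probability maximising per-node throughput.
        def per_node_throughput(p, n):
            # A node succeeds when it transmits and the other n - 1 nodes stay silent.
            return p * (1 - p) ** (n - 1)

        def best_attempt_probability(n, grid=1000):
            candidates = [i / grid for i in range(1, grid)]
            return max(candidates, key=lambda p: per_node_throughput(p, n))

        n = 5
        p_star = best_attempt_probability(n)
        print(p_star, per_node_throughput(p_star, n))  # optimum is near p = 1/n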