
    How Much Can D2D Communication Reduce Content Delivery Latency in Fog Networks with Edge Caching?

    A Fog-Radio Access Network (F-RAN) is studied in which cache-enabled Edge Nodes (ENs) with dedicated fronthaul connections to the cloud aim to deliver content to mobile users. Using an information-theoretic approach, this work tackles the problem of quantifying the latency reduction that can be obtained by enabling Device-to-Device (D2D) communication over out-of-band broadcast links. Following prior work, the Normalized Delivery Time (NDT), a metric that captures the worst-case delivery latency in the high signal-to-noise ratio regime, is adopted as the performance criterion of interest. Joint edge caching, downlink transmission, and D2D communication policies based on compress-and-forward are proposed and shown to be information-theoretically optimal to within a constant multiplicative factor of two for all values of the problem parameters, and to achieve the minimum NDT in a number of special cases. The analysis provides insight into the role of D2D cooperation in reducing delivery latency.
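
    As context for the metric above, the NDT used in this line of work is commonly defined along the following lines (a sketch with assumed notation, not quoted from this paper): for files of L bits, fractional edge cache size \mu, and fronthaul rate scaling r, with delivery time T(\mu, r, P) at signal-to-noise ratio P,

        \[
          \delta(\mu, r) \;=\; \lim_{P \to \infty} \; \limsup_{L \to \infty} \; \frac{\mathbb{E}\!\left[ T(\mu, r, P) \right]}{L / \log P},
        \]

    where L / \log P is the time needed to deliver L bits over an interference-free point-to-point link of capacity \log P. Under this kind of definition, the factor-of-two result above says the achievable NDT is at most twice the minimum NDT for every (\mu, r).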

    Automatic synthesis and optimization of chip multiprocessors

    Microprocessor technology has experienced enormous growth during the last decades. Rapid downscaling of CMOS technology has led to higher operating frequencies and performance densities, but faces the fundamental issue of power dissipation. Chip Multiprocessors (CMPs) have become the latest paradigm for improving the power-performance efficiency of computing systems by exploiting the parallelism inherent in applications. Industrial and prototype implementations have already demonstrated the benefits achieved by CMPs with hundreds of cores.

    CMP architects are challenged to take many complex design decisions. A few of them are:
    - What should be the ratio between the core and cache areas on a chip?
    - Which core architectures should be selected?
    - How many cache levels should the memory subsystem have?
    - Which interconnect topologies provide efficient on-chip communication?
    These and many other aspects create a complex multidimensional space for architectural exploration. Design automation tools become essential to make architectural exploration feasible under hard time-to-market constraints. The exploration methods have to be efficient and scalable to handle future generations of on-chip architectures with hundreds or thousands of cores.

    Furthermore, once a CMP has been fabricated, the need for efficient deployment of the many-core processor arises. Intelligent techniques for task mapping and scheduling onto CMPs are necessary to guarantee full use of the benefits brought by many-core technology. These techniques have to consider the peculiarities of modern architectures, such as the availability of enhanced power-saving techniques and the presence of complex memory hierarchies.

    This thesis has several objectives. The first objective is to develop methods for efficient analytical modeling and architectural design-space exploration of CMPs. Efficiency is achieved by using analytical models instead of simulation and by replacing exhaustive exploration with an intelligent search strategy. Additionally, these methods incorporate high-level models for physical planning. The related contributions are described in Chapters 3, 4 and 5 of the document.

    The second objective of this work is to propose a scalable algorithm for task mapping onto general-purpose CMPs with power management techniques, for efficient deployment of many-core systems. This contribution is explained in Chapter 6 of this document.

    Finally, the third objective of this thesis is to address the issues of on-chip interconnect design and exploration by developing a model for simultaneous topology customization and deadlock-free routing in Networks-on-Chip. The developed methodology can be applied to various classes of on-chip systems, ranging from general-purpose chip multiprocessors to application-specific solutions. Chapter 7 describes the proposed model.

    The presented methods have been thoroughly tested experimentally, and the results are described in this dissertation. At the end of the document, several possible directions for future research are proposed.
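
    As a toy illustration of the analytical-modeling approach described in the first objective, the sketch below scores core/cache area splits with closed-form performance and power models instead of simulation. All models, constants, and the grid scan are illustrative assumptions, not the thesis' actual framework (which replaces exhaustive scanning with an intelligent search strategy).

        # Minimal sketch of analytical-model-driven CMP design-space exploration.
        # The analytical models and constants below are illustrative assumptions.
        import itertools

        AREA_BUDGET_MM2 = 400.0
        CORE_AREA_MM2 = 4.0            # area of one core (assumed)
        CACHE_AREA_MM2_PER_MB = 2.0    # area per MB of cache (assumed)

        def perf(cores, cache_mb, parallel_fraction=0.95):
            """Amdahl-style speedup combined with a power-law cache-miss penalty."""
            speedup = 1.0 / ((1 - parallel_fraction) + parallel_fraction / cores)
            miss_rate = (1.0 + cache_mb) ** -0.5       # square-root rule of thumb (assumed)
            mem_penalty = 1.0 + 4.0 * miss_rate        # relative slowdown from misses
            return speedup / mem_penalty

        def power(cores, cache_mb):
            return 2.0 * cores + 0.1 * cache_mb        # W per core / per MB (assumed)

        def area(cores, cache_mb):
            return cores * CORE_AREA_MM2 + cache_mb * CACHE_AREA_MM2_PER_MB

        def explore():
            """Scan a small grid; a real flow would use a guided search instead."""
            best = None
            for cores, cache_mb in itertools.product(range(2, 129, 2), range(1, 129)):
                if area(cores, cache_mb) > AREA_BUDGET_MM2:
                    continue
                score = perf(cores, cache_mb) / power(cores, cache_mb)  # perf per watt
                if best is None or score > best[0]:
                    best = (score, cores, cache_mb)
            return best

        if __name__ == "__main__":
            score, cores, cache_mb = explore()
            print(f"best: {cores} cores, {cache_mb} MB cache, perf/W = {score:.4f}")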

    Bluetooth Mesh under the Microscope: How much ICN is Inside?

    Bluetooth (BT) mesh is a new mode of BT operation for low-energy devices that offers group-based publish-subscribe as a network service with additional caching capabilities. These features resemble concepts of information-centric networking (ICN), and the analogy to ICN has been repeatedly drawn in the BT community. In this paper, we compare BT mesh with ICN both conceptually and in real-world experiments. We contrast both architectures and their design decisions in detail. Experiments are performed on an IoT testbed using NDN/CCNx and BT mesh on constrained RIOT nodes. Our findings indicate significant differences both in concepts and in real-world performance. Supported by new insights, we identify synergies and sketch the design of a BT-ICN that benefits from both worlds.

    Multicell Edge Coverage Enhancement Using Mobile UAV-Relay

    Unmanned aerial vehicle (UAV)-assisted communication is a promising technology in future wireless communication networks. UAVs can not only help offload data traffic from ground base stations (GBSs) but also improve the Quality of Service (QoS) of cell-edge users (CEUs). In this article, we consider the enhancement of cell-edge communications through a mobile relay, i.e., a UAV, in multicell networks. During each transmission period, GBSs first send data to the UAV, and then the UAV forwards its received data to CEUs according to a certain association strategy. In order to maximize the sum rate of all CEUs, we jointly optimize the UAV mobility management (trajectory, velocity, and acceleration) and the association strategy of CEUs to the UAV, subject to minimum rate requirements of the CEUs, mobility constraints of the UAV, and practical causal buffer constraints. To address the resulting mixed-integer nonconvex problem, we transform it into two convex subproblems by applying tight bounds and relaxations. An iterative algorithm is proposed to solve the two subproblems in an alternating manner. Numerical results show that the proposed algorithm achieves higher CEU rates than the existing benchmark schemes.
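
    A schematic version of the joint optimization described above may help fix ideas; the notation (UAV position q[n], velocity v[n], acceleration a[n], association indicator alpha_k[n], per-slot rate R_k[n]) is assumed for illustration and does not reproduce the paper's exact formulation:

        \[
          \max_{\{\mathbf{q}[n],\, \mathbf{v}[n],\, \mathbf{a}[n],\, \alpha_k[n]\}} \;\; \sum_{k=1}^{K} \sum_{n=1}^{N} \alpha_k[n]\, R_k[n]
        \]
        \[
          \text{s.t.} \quad \sum_{n=1}^{N} \alpha_k[n]\, R_k[n] \ge R_{\min}, \qquad
          \mathbf{q}[n+1] = \mathbf{q}[n] + \mathbf{v}[n]\,\delta_t + \tfrac{1}{2}\,\mathbf{a}[n]\,\delta_t^{2},
        \]
        \[
          \|\mathbf{v}[n]\| \le V_{\max}, \qquad \|\mathbf{a}[n]\| \le A_{\max}, \qquad
          \sum_{m=1}^{n} \sum_{k=1}^{K} \alpha_k[m]\, R_k[m] \;\le\; \sum_{m=1}^{n} R_{\mathrm{GBS}}[m] \quad \forall n,
        \]

    where the last constraint is the causal buffer condition: by every slot n, the UAV can only forward data it has already received from the GBSs. The binary association variables make the problem mixed-integer and nonconvex; the alternating procedure in the abstract corresponds to iterating between an association subproblem and a convexified trajectory subproblem.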

    Relaying in the Internet of Things (IoT): A Survey

    The deployment of relays between Internet of Things (IoT) end devices and gateways can improve link quality. In cellular-based IoT, relays have the potential to reduce base station overload. The energy expended in single-hop long-range communication can be reduced if relays listen to transmissions of end devices and forward these observations to gateways. However, incorporating relays into IoT networks faces some challenges. IoT end devices are designed primarily for uplink communication of small-sized observations toward the network; hence, opportunistically using end devices as relays requires a redesign of the medium access control (MAC) layer protocol of such end devices and possibly the addition of new communication interfaces. Additionally, the wake-up time of IoT end devices needs to be synchronized with that of the relays. For cellular-based IoT, the possibility of using infrastructure relays exists, and noncellular IoT networks can leverage the presence of mobile devices for relaying, for example, in remote healthcare. However, the latter presents problems of incentivizing relay participation and managing the mobility of relays. Furthermore, although relays can increase the lifetime of IoT networks, deploying relays implies the need for additional batteries to power them. This can erode the energy efficiency gain that relays offer. Therefore, designing relay-assisted IoT networks that provide acceptable trade-offs is key, and this goes beyond adding an extra transmit RF chain to a relay-enabled IoT end device. There has been increasing research interest in IoT relaying, as demonstrated in the available literature. Works that consider these issues are surveyed in this paper to provide insight into the state of the art, offer design guidance to network designers, and motivate future research directions.
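
    The energy argument for relaying can be made concrete with a rough link-budget calculation; the sketch below uses an assumed log-distance path-loss model and made-up numbers, purely to illustrate the scaling, not any particular IoT technology.

        # Rough link-budget comparison: one long hop vs. two shorter hops via a relay.
        # Log-distance path-loss model with assumed parameters; illustrative only.
        import math

        def tx_power_dbm(distance_m, path_loss_exp=3.0, rx_sensitivity_dbm=-120.0,
                         pl_ref_db=40.0, ref_m=1.0):
            """Transmit power needed to reach the receiver sensitivity at a distance."""
            path_loss_db = pl_ref_db + 10 * path_loss_exp * math.log10(distance_m / ref_m)
            return rx_sensitivity_dbm + path_loss_db

        direct = tx_power_dbm(2000)    # end device -> gateway, 2 km
        one_hop = tx_power_dbm(1000)   # end device -> relay (or relay -> gateway), 1 km

        print(f"direct link:   {direct:.1f} dBm")
        print(f"each 1 km hop: {one_hop:.1f} dBm")
        # With exponent 3, halving the distance saves 10 * 3 * log10(2) ~ 9 dB per hop,
        # so the end device transmits at roughly 1/8 of the direct-link power; the relay
        # spends the energy for the second hop, which is exactly the battery trade-off
        # the survey highlights.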

    Full-Duplex Wireless for 6G: Progress Brings New Opportunities and Challenges

    The use of in-band full-duplex (FD) enables nodes to simultaneously transmit and receive on the same frequency band, challenging the traditional half-duplex assumption in wireless network design. The full-duplex capability enhances spectral efficiency and decreases latency, two key drivers behind the performance expectations of next-generation mobile networks. In less than ten years, in-band FD has advanced from being demonstrated in research labs to being implemented in standards and products, presenting new opportunities to exploit its foundational concepts. Some of the most significant opportunities include using FD to enable wireless networks to sense the physical environment, integrate sensing and communication applications, develop integrated access and backhaul solutions, and work with smart signal propagation environments powered by reconfigurable intelligent surfaces. However, these new opportunities also come with new challenges for large-scale commercial deployment of FD technology, such as managing self-interference, combating cross-link interference in multi-cell networks, and ensuring coexistence among dynamic time division duplex, subband FD, and FD networks.
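
    A toy calculation illustrates why self-interference management decides whether FD actually improves spectral efficiency over half-duplex (HD); every number below is an assumption chosen for illustration.

        # Toy comparison of half-duplex vs. full-duplex sum spectral efficiency for a
        # single bidirectional link, with residual self-interference (SI) after
        # cancellation. All numbers are illustrative assumptions.
        import math

        def se_bits_per_hz(snr_linear):
            return math.log2(1 + snr_linear)

        signal_dbm = -60.0    # received power of the desired signal (assumed)
        noise_dbm = -95.0     # noise floor (assumed)
        tx_dbm = 20.0         # own transmit power (assumed)

        for si_cancellation_db in (70, 90, 110):
            residual_si_dbm = tx_dbm - si_cancellation_db
            noise_plus_si_mw = 10 ** (noise_dbm / 10) + 10 ** (residual_si_dbm / 10)
            sinr_fd = 10 ** (signal_dbm / 10) / noise_plus_si_mw
            snr_hd = 10 ** ((signal_dbm - noise_dbm) / 10)

            # HD splits time between the two directions; FD serves both simultaneously.
            hd_sum = se_bits_per_hz(snr_hd)       # 2 directions x 1/2 time each
            fd_sum = 2 * se_bits_per_hz(sinr_fd)
            print(f"{si_cancellation_db} dB SI cancellation: "
                  f"HD {hd_sum:.2f} vs FD {fd_sum:.2f} bit/s/Hz")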