
    The Four-C Framework for High Capacity Ultra-Low Latency in 5G Networks: A Review

    Network latency will be a critical performance metric for the Fifth Generation (5G) networks expected to be fully rolled out in 2020 through the IMT-2020 project. The multi-user multiple-input multiple-output (MU-MIMO) technology is a key enabler for the 5G massive connectivity criterion, especially from the massive densification perspective. It therefore appears that 5G MU-MIMO will face a daunting task in achieving an end-to-end 1 ms ultra-low latency budget if traditional network set-up criteria are strictly adhered to. Moreover, 5G latency will have added dimensions of scalability and flexibility compared to previously deployed technologies. The scalability dimension caters for rapidly growing demand as new applications evolve, while the flexibility dimension complements it by investigating novel non-stacked protocol architectures. The goal of this review paper is to present an ultra-low latency reduction framework for 5G communications that accounts for flexibility and scalability. The Four-C framework, consisting of cost, complexity, cross-layer design and computing, is analyzed and discussed. It draws on several emerging technologies: software-defined networking (SDN), network function virtualization (NFV) and fog networking. This review will contribute significantly towards the future implementation of flexible, high capacity, ultra-low latency 5G communications.
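
    As a rough illustration of why the 1 ms end-to-end budget is so demanding under a traditional stacked set-up, the Python sketch below sums per-segment delay contributions; every figure is a hypothetical assumption chosen for illustration, not a value from the review.

        # Hypothetical per-segment latency contributions (in milliseconds) for a
        # traditional stacked set-up; all values are illustrative assumptions.
        BUDGET_MS = 1.0
        segments = {
            "air_interface": 0.5,        # radio transmission interval
            "baseband_processing": 0.3,  # PHY/MAC processing at the base station
            "backhaul_and_core": 0.4,    # transport through the core network
            "application_server": 0.2,   # processing at the far end
        }
        total = sum(segments.values())
        print(f"end-to-end latency: {total:.1f} ms (budget: {BUDGET_MS} ms)")
        for name, ms in segments.items():
            print(f"  {name}: {ms} ms ({ms / total:.0%} of total)")
        if total > BUDGET_MS:
            print("budget exceeded -> motivates SDN/NFV/fog-based reduction")

    Even with optimistic per-segment figures, the sum overshoots the budget, which is why the framework looks to SDN, NFV and fog networking to remove or shorten whole segments rather than merely tune each one.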

    Reliable cost-optimal deployment of wireless sensor networks

    Wireless Sensor Networks (WSNs) technology is currently considered one of the key technologies for realizing the Internet of Things (IoT). Many important WSN applications are critical in nature, such that failure of the WSN to carry out its required tasks can have serious detrimental effects. Consequently, guaranteeing that the WSN functions satisfactorily throughout its intended mission time, i.e. that the WSN is reliable, is one of the fundamental requirements of the network deployment strategy. Achieving this requirement at minimum deployment cost is particularly important for critical applications in which the deployed sensor nodes (SNs) are equipped with expensive hardware. However, to the best of our knowledge, WSN reliability defined in the traditional sense, especially in conjunction with minimizing deployment cost, has not been considered as a deployment requirement in existing WSN deployment algorithms. Addressing this major limitation is the central focus of this dissertation. We define the reliable cost-optimal WSN deployment as the one that has minimum deployment cost with a reliability level that meets or exceeds a minimum level specified by the targeted application. We coin the problem of finding such deployments, for a given set of application-specific parameters, the Minimum-Cost Reliability-Constrained Sensor Node Deployment Problem (MCRC-SDP). To accomplish the aim of the dissertation, we propose a novel WSN reliability metric which adopts a more accurate SN model than that used in existing metrics. The proposed reliability metric is used to formulate the MCRC-SDP as a constrained combinatorial optimization problem, which we prove to be NP-Complete. Two heuristic WSN deployment optimization algorithms are then developed to find high-quality solutions for the MCRC-SDP. Finally, we investigate the practical realization of the techniques that we developed as solutions of the MCRC-SDP. For this purpose, we discuss why existing WSN Topology Control Protocols (TCPs) are not suitable for managing such reliable cost-optimal deployments. Accordingly, we propose a practical TCP that is suitable for managing the sleep/active cycles of the redundant SNs in such deployments. Experimental results suggest that the proposed TCP's overhead and network Time To Repair (TTR) are relatively low, which demonstrates the applicability of our proposed deployment solution in practice.
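
    To make the shape of the MCRC-SDP concrete, here is a minimal greedy sketch in Python of a minimum-cost, reliability-constrained deployment. The reliability model (independent node survival over the mission time, coverage of discrete targets) and the gain-per-cost selection rule are illustrative assumptions, not the dissertation's metric or its heuristics.

        # Greedy sketch: pick candidate sensor sites until every target reaches
        # the required reliability. Assumes independent node survival; this is
        # an illustrative model, not the dissertation's reliability metric.

        def greedy_deploy(candidates, targets, r_min):
            """candidates: list of (cost, survival_prob, covered_targets) tuples."""
            deployed = []
            miss = {t: 1.0 for t in targets}  # P(no surviving node covers t)

            def gain_per_cost(c):
                cost, p, covered = c
                need = {t for t in targets if 1.0 - miss[t] < r_min}
                return sum(miss[t] * p for t in covered & need) / cost

            remaining = list(candidates)
            while any(1.0 - miss[t] < r_min for t in targets):
                if not remaining:
                    raise ValueError("reliability target unreachable")
                best = max(remaining, key=gain_per_cost)
                if gain_per_cost(best) <= 0.0:
                    raise ValueError("no remaining candidate helps an unmet target")
                remaining.remove(best)
                deployed.append(best)
                _, p, covered = best
                for t in covered:
                    miss[t] *= 1.0 - p  # another independent chance to cover t
            return deployed

        # toy usage with three hypothetical sites and a 0.85 reliability floor
        sites = [(3.0, 0.9, {"t1", "t2"}), (2.0, 0.9, {"t2", "t3"}), (1.5, 0.9, {"t1"})]
        plan = greedy_deploy(sites, {"t1", "t2", "t3"}, r_min=0.85)
        print("deployed:", plan, "| total cost:", sum(c for c, _, _ in plan))

    A greedy rule like this carries no optimality guarantee for an NP-Complete problem; it only illustrates the cost-versus-reliability trade-off that the dissertation formalizes and attacks with purpose-built heuristics.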

    Geomorphometry 2020. Conference Proceedings

    Geomorphometry is the science of quantitative land surface analysis. It gathers various mathematical, statistical and image processing techniques to quantify morphological, hydrological, ecological and other aspects of a land surface. Common synonyms for geomorphometry are geomorphological analysis, terrain morphometry or terrain analysis, and land surface analysis. The typical input to geomorphometric analysis is a square-grid representation of the land surface: a digital elevation (or land surface) model. The first Geomorphometry conference took place in 2009 in Zürich, Switzerland. Subsequent events were held in Redlands (California), Nánjīng (China), Poznan (Poland) and Boulder (Colorado), at roughly two-year intervals. The International Society for Geomorphometry (ISG) and the Organizing Committee scheduled the sixth Geomorphometry conference for Perugia, Italy, in June 2020. Worldwide safety measures meant the event could not be held in person, and we ruled out holding the conference remotely. Thus, we postponed the event by one year: it will be organized in June 2021 in Perugia, hosted by the Research Institute for Geo-Hydrological Protection of the Italian National Research Council (CNR IRPI) and the Department of Physics and Geology of the University of Perugia. One of the reasons why we postponed the conference, instead of canceling it, was the encouraging number of submitted abstracts. Abstracts are in fact short papers of four pages, including figures and references, and they were peer-reviewed by the Scientific Committee of the conference. This book is a collection of the contributions revised by the authors after peer review. We grouped them in seven classes, as follows:
    • Data and methods (13 abstracts)
    • Geoheritage (6 abstracts)
    • Glacial processes (4 abstracts)
    • LIDAR and high resolution data (8 abstracts)
    • Morphotectonics (8 abstracts)
    • Natural hazards (12 abstracts)
    • Soil erosion and fluvial processes (16 abstracts)
    The 67 abstracts represent 80% of the initial contributions. The remaining ones were either not accepted after peer review or withdrawn by their authors. Most of the contributions contain original material, and an extended version of a subset of them will be included in a special issue of a regular journal.
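
    Since the typical input to geomorphometric analysis is a square-grid digital elevation model, a minimal Python/NumPy sketch of one of the most common grid derivatives, slope computed by central differences, may make this concrete; the toy DEM values and the 10 m cell size are illustrative assumptions.

        import numpy as np

        # Slope (degrees) from a square-grid DEM by central differences, a
        # standard geomorphometric derivative. DEM and cell size are toy values.
        def slope_degrees(dem, cell_size):
            dz_dy, dz_dx = np.gradient(dem, cell_size)  # elevation change per metre
            return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

        dem = np.array([[100.0, 101.0, 103.0, 106.0],
                        [100.5, 102.0, 104.5, 108.0],
                        [101.0, 103.0, 106.0, 110.0],
                        [101.5, 104.0, 107.5, 112.0]])  # elevations in metres
        print(np.round(slope_degrees(dem, cell_size=10.0), 1))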

    Bandwidth-aware distributed ad-hoc grids in deployed wireless sensor networks

    Nowadays, cost-effective sensor networks can be deployed thanks to a plethora of recent engineering advances in wireless technology, storage miniaturisation, consolidated microprocessor design, and sensing technologies. Whilst sensor systems are becoming relatively cheap to deploy, two issues arise in their typical realisations: (i) the types of low-cost sensors often employed offer limited resolution and tend to produce noisy data; (ii) network bandwidths are relatively low and the energetic cost of using the radio to communicate is relatively high. To reduce the transmission of unnecessary data, there is a strong argument for performing local computation. However, this can require greater computational capacity than is available on a single low-power processor. Traditionally, such a problem has been addressed by load balancing: fragmenting processes into tasks and distributing them amongst the least loaded nodes. However, the act of distributing tasks, and any subsequent communication between them, imposes a geographically defined load on the network. Because of the shared broadcast nature of the radio channels and MAC layers in common use, any communication within an area will be slowed by additional traffic, delaying the computation and reporting that rely on the availability of the network. In this dissertation, we explore the tradeoff between the distribution of computation, needed to enhance the computational abilities of networks of resource-constrained nodes, and the network traffic that results from that distribution. We devise an application-independent distribution paradigm and a set of load distribution algorithms to allow computationally intensive applications to be collaboratively computed on resource-constrained devices. We then empirically investigate the effects of network traffic information on distribution performance, devising bandwidth-aware task offload mechanisms that combine nodes' computational capabilities with local network conditions, and studying the impact of making informed offload decisions on system performance. The highly deployment-specific nature of radio communication means that simulations capable of producing validated, high-quality results are extremely hard to construct. Consequently, to produce meaningful results, our experiments have used empirical analysis based on a network of motes located at UCL, running a variety of I/O-bound, CPU-bound and mixed tasks. Using this setup, we have established that even relatively simple load sharing algorithms can improve performance over a range of different artificially generated scenarios, with more or less timely contextual information. In addition, we have taken a realistic application, based on location estimation, and implemented it across the same network, with results that support the conclusions drawn from the artificially generated traffic.
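
    As an illustration of the kind of bandwidth-aware offload decision explored above, the Python sketch below runs a task locally unless shipping it to a peer pays off once transfer time over the currently observed bandwidth is included; the node speeds, payload sizes and bandwidths are hypothetical figures, not measurements from the UCL mote network.

        # Bandwidth-aware offload decision: compare local execution time with
        # transfer-plus-remote-execution time for each peer. All figures are
        # hypothetical; a real deployment would measure them at runtime.

        def offload_target(task_cycles, payload_bits, local_speed_hz, peers):
            """peers: list of (name, effective_speed_hz, bandwidth_bps) tuples."""
            best_name, best_time = "local", task_cycles / local_speed_hz
            for name, speed, bandwidth in peers:
                remote_time = payload_bits / bandwidth + task_cycles / speed
                if remote_time < best_time:
                    best_name, best_time = name, remote_time
            return best_name, best_time

        # toy scenario: a CPU-bound task on a slow mote with two candidate peers
        peers = [("mote_a", 16e6, 250e3), ("mote_b", 8e6, 250e3)]
        print(offload_target(task_cycles=4e6, payload_bits=64e3,
                             local_speed_hz=4e6, peers=peers))

    A fuller mechanism would also account for the extra channel contention the transfer itself creates, which is precisely the distribution-versus-traffic trade-off the dissertation investigates.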