
    End-to-End Simulation of 5G mmWave Networks

    Due to its potential for multi-gigabit and low latency wireless links, millimeter wave (mmWave) technology is expected to play a central role in 5th generation cellular systems. While there has been considerable progress in understanding the mmWave physical layer, innovations will be required at all layers of the protocol stack, in both the access and the core network. Discrete-event network simulation is essential for end-to-end, cross-layer research and development. This paper provides a tutorial on a recently developed full-stack mmWave module integrated into the widely used open-source ns-3 simulator. The module includes a number of detailed statistical channel models as well as the ability to incorporate real measurements or ray-tracing data. The Physical (PHY) and Medium Access Control (MAC) layers are modular and highly customizable, making it easy to integrate algorithms or compare Orthogonal Frequency Division Multiplexing (OFDM) numerologies, for example. The module is interfaced with the core network of the ns-3 Long Term Evolution (LTE) module for full-stack simulations of end-to-end connectivity, and advanced architectural features, such as dual connectivity, are also available. To facilitate the understanding of the module, and verify its correct functioning, we provide several examples that show the performance of the custom mmWave stack as well as custom congestion control algorithms designed specifically for efficient utilization of the mmWave channel.
    Comment: 25 pages, 16 figures, submitted to IEEE Communications Surveys and Tutorials (revised Jan. 2018).
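
    As a concrete illustration of the workflow described above, the sketch below shows how a minimal end-to-end mmWave simulation is typically assembled with the module's C++ helper API. The class and method names (MmWaveHelper, InstallEnbDevice, InstallUeDevice, AttachToClosestEnb) follow the module's helper conventions but are written from memory and should be checked against the module's bundled examples; treat this as an assumed sketch, not the paper's reference script.

        // Minimal ns-3 mmWave simulation sketch (assumed API; verify the header
        // name and the MmWaveHelper namespace against the module's examples).
        #include "ns3/core-module.h"
        #include "ns3/mobility-module.h"
        #include "ns3/mmwave-helper.h"

        using namespace ns3;

        int main (int argc, char *argv[])
        {
          // One mmWave base station (eNB) and one user terminal (UE).
          NodeContainer enbNodes, ueNodes;
          enbNodes.Create (1);
          ueNodes.Create (1);

          // Static positions; a real scenario would use a mobility model.
          MobilityHelper mobility;
          mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
          mobility.Install (enbNodes);
          mobility.Install (ueNodes);

          // The helper wires up the PHY/MAC stack and the channel model.
          Ptr<mmwave::MmWaveHelper> mmwaveHelper = CreateObject<mmwave::MmWaveHelper> ();
          NetDeviceContainer enbDevs = mmwaveHelper->InstallEnbDevice (enbNodes);
          NetDeviceContainer ueDevs  = mmwaveHelper->InstallUeDevice (ueNodes);
          mmwaveHelper->AttachToClosestEnb (ueDevs, enbDevs);

          Simulator::Stop (Seconds (1.0));
          Simulator::Run ();
          Simulator::Destroy ();
          return 0;
        }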

    Towards Real-time Wireless Sensor Networks

    Wireless sensor networks are poised to change the way computer systems interact with the physical world. We plan on entrusting sensor systems to collect medical data from patients, monitor the safety of our infrastructure, and control manufacturing processes in our factories. To date, the focus of the sensor network community has been on developing best-effort services. This approach is insufficient for many applications since it does not enable developers to determine whether a system's requirements in terms of communication latency, bandwidth utilization, reliability, or energy consumption are met. The focus of this thesis is to develop real-time network support for such critical applications. The first part of the thesis focuses on developing a power management solution for the radio subsystem which addresses both the idle-listening problem and the power control problem. In contrast to traditional power management solutions, which focus solely on reducing energy consumption, the distinguishing feature of our approach is that it achieves both energy efficiency and real-time communication. A solution to the idle-listening problem is proposed in Energy Efficient Sleep Scheduling based on Application Semantics (ESSAT). The novelty of ESSAT lies in that it takes advantage of the common features of data collection applications to determine when to turn a node's radio on and off without affecting real-time performance. A solution to the power control problem is proposed in Real-time Power-Aware Routing (RPAR). RPAR tunes the transmission power for each packet based on its deadline such that energy is saved without missing packet deadlines. The main theoretical contribution of this thesis is the development of novel transmission scheduling techniques optimized for data collection applications. This work bridges the gap between wireless sensor networks and real-time scheduling theory, which has traditionally been applied to processor scheduling. The proposed approach has significant advantages over existing design methodologies: 1) it provides predictable performance, allowing the performance of a system to be estimated upon its deployment; 2) it is possible to detect and handle overload conditions through simple rate control mechanisms; and 3) it easily accommodates workload changes. I developed this framework under a realistic interference model by coordinating the activities at the MAC, link, and routing layers. The last component of this thesis focuses on the development of a real-time patient monitoring system for general hospital units. The system is designed to facilitate the detection of clinical deterioration, which is a key factor in saving lives and reducing healthcare costs. Since patients in general hospital wards are often ambulatory, a key challenge is to achieve high reliability even in the presence of mobility. To support patient mobility, I developed the Dynamic Relay Association Protocol -- a simple and effective mechanism for dynamically discovering the right relays for forwarding patient data -- and a Radio Mapping Tool -- a practical tool for ensuring network coverage in 802.15.4 networks. We show that it is feasible to use low-power and low-cost wireless sensor networks for clinical monitoring through an in-depth clinical study. The study was performed in a step-down cardiac care unit at Barnes-Jewish Hospital. This is the first long-term study of such a patient monitoring system.
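
    The deadline-driven power control idea behind RPAR can be illustrated with a short sketch: for each packet, pick the lowest transmission power whose expected per-hop delay still fits within the packet's remaining slack. The sketch below is an assumed illustration only; the delay model, data structures, and names are not taken from the dissertation.

        // Illustrative deadline-aware transmit-power selection in the spirit of
        // RPAR (names, delay model, and power levels are assumptions, not the
        // dissertation's actual protocol logic).
        #include <vector>
        #include <optional>

        struct PowerLevel {
            double txPowerDbm;        // candidate transmission power
            double expectedDelayMs;   // expected per-hop delay at this power
                                      // (lower power -> more retransmissions -> higher delay)
        };

        // Pick the lowest power whose expected delay still meets the packet's slack.
        std::optional<PowerLevel> choosePower(const std::vector<PowerLevel>& levels,
                                              double slackMs, int hopsRemaining) {
            double perHopBudget = slackMs / hopsRemaining;   // evenly split the slack
            std::optional<PowerLevel> best;
            for (const auto& lvl : levels) {
                if (lvl.expectedDelayMs <= perHopBudget &&
                    (!best || lvl.txPowerDbm < best->txPowerDbm)) {
                    best = lvl;   // feasible and cheaper than the current choice
                }
            }
            return best;          // empty if no level can meet the deadline
        }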

    Resource allocation in WiMAX mesh networks

    The IEEE 802.16 standard, popularly known as WiMAX, is at the forefront of the technological drive. Achieving high system throughput in these networks is challenging due to interference, which limits concurrent transmissions. In this thesis, we study routing and link scheduling in WiMAX mesh networks. We present simple joint routing and link scheduling algorithms that have outperformed most of the existing proposals in our experiments. Our session-based routing and link scheduling produced results that are approximately 90% of a trivial lower bound. We also study the problem of quality of service (QoS) provisioning in WiMAX mesh networks. QoS has become an attractive area of study, driven by the increasing demand for multimedia content delivered wirelessly. To accommodate the different applications, the IEEE 802.16 standard defines four classes of service. In this dissertation, we propose a comprehensive scheme consisting of routing, link scheduling, call admission control (CAC) and channel assignment that considers all classes of service. Much of the work in the literature considers each of these problems in isolation. Our routing schemes use a metric that combines interference and traffic load to compute routes for requests, while our link scheduling ensures that the QoS requirements of admitted requests are strictly met. Results from our simulations indicate that our routing and link scheduling schemes significantly improve network performance when the network is congested.
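
    The routing metric described above, which combines interference and traffic load, can be sketched as a per-link cost that a shortest-path search then sums along candidate routes. The weighting, field names, and the infinite cost for saturated links in the sketch below are illustrative assumptions rather than the thesis's exact formulation.

        // Illustrative link-cost computation combining interference and load,
        // in the spirit of the routing metric described above (weights and
        // field names are assumptions for illustration only).
        #include <vector>
        #include <limits>

        struct Link {
            int from, to;
            int interferingLinks;   // number of links within interference range
            double loadFraction;    // fraction of slots already committed on this link
        };

        // Higher interference or load makes a link less attractive for new requests.
        double linkCost(const Link& l, double alpha = 0.5) {
            if (l.loadFraction >= 1.0)                       // fully booked link
                return std::numeric_limits<double>::infinity();
            return alpha * l.interferingLinks + (1.0 - alpha) * l.loadFraction;
        }

        // A route's cost is the sum of its link costs; a shortest-path search
        // (e.g. Dijkstra) over these costs yields the route for a new request.
        double routeCost(const std::vector<Link>& path) {
            double c = 0.0;
            for (const auto& l : path) c += linkCost(l);
            return c;
        }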

    Sharing GPUs for Real-Time Autonomous-Driving Systems

    Autonomous vehicles at mass-market scales are on the horizon. Cameras are the least expensive among common sensor types and can preserve features such as color and texture that other sensors cannot. Therefore, realizing full autonomy in vehicles at a reasonable cost is expected to entail computer-vision techniques. These computer-vision applications require massive parallelism provided by the underlying shared accelerators, such as graphics processing units, or GPUs, to function “in real time.” However, when computer-vision researchers and GPU vendors refer to “real time,” they usually mean “real fast”; in contrast, certifiable automotive systems must be “real time” in the sense of being predictable. This dissertation addresses the challenging problem of how GPUs can be shared predictably and efficiently for real-time autonomous-driving systems. We tackle this challenge in four steps. First, we investigate NVIDIA GPUs with respect to scheduling, synchronization, and execution. We conduct an extensive set of experiments to infer NVIDIA GPU scheduling rules, which are unfortunately undisclosed by NVIDIA and inaccessible owing to their closed-source software stack. We also expose a list of pitfalls pertaining to CPU-GPU synchronization that can result in unbounded response times of GPU-using applications. Lastly, we examine a fundamental trade-off for designing real-time tasks under different execution options. Overall, our investigation provides an essential understanding of NVIDIA GPUs, allowing us to further model and analyze GPU tasks. Second, we develop a new model and conduct schedulability analysis for GPU tasks. We extend the well-studied sporadic task model with additional parameters that characterize the parallel execution of GPU tasks. We show that NVIDIA scheduling rules are subject to fundamental capacity loss, which implies a necessary total utilization bound. We derive response-time bounds for GPU task systems that satisfy our schedulability conditions. Third, we address an industrial challenge of supplying the throughput performance of computer-vision frameworks to support adequate coverage and redundancy offered by an array of cameras. We re-think the design of convolutional neural network (CNN) software to better utilize hardware resources and achieve increased throughput (number of simultaneous camera streams) without any appreciable increase in per-frame latency (camera to CNN output) or reduction of per-stream accuracy. Fourth, we apply our analysis to a finer-grained graph scheduling of a computer-vision standard, OpenVX, which explicitly targets embedded and real-time systems. We evaluate both the analytical and empirical real-time performance of our approach.
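
    For readers unfamiliar with schedulability analysis, the sketch below shows the classic response-time recurrence for sporadic tasks under fixed-priority scheduling, which is the kind of analysis the dissertation extends with GPU-specific parameters. It is standard background only and is not the dissertation's GPU response-time bound.

        // Classic response-time recurrence for sporadic tasks under fixed-priority
        // scheduling: R = C_i + sum_j ceil(R / T_j) * C_j, iterated to a fixed point.
        // Background material only; NOT the dissertation's GPU-specific analysis.
        #include <vector>
        #include <cmath>
        #include <optional>

        struct Task { double C; double T; };   // worst-case cost and minimum period

        // Response time of task ti given the set of higher-priority tasks hp;
        // returns empty if the bound exceeds the deadline (unschedulable).
        std::optional<double> responseTime(const Task& ti, const std::vector<Task>& hp,
                                           double deadline) {
            double R = ti.C;
            while (true) {
                double next = ti.C;
                for (const auto& tj : hp)
                    next += std::ceil(R / tj.T) * tj.C;    // interference from tj
                if (next > deadline) return std::nullopt;  // deadline exceeded
                if (next == R) return R;                   // fixed point reached
                R = next;
            }
        }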

    Intelligence in 5G networks

    Over the past decade, Artificial Intelligence (AI) has become an important part of our daily lives; however, its application to communication networks has been partial and unsystematic, with uncoordinated efforts that often conflict with each other. Providing a framework to integrate the existing studies and to actually build an intelligent network is a top research priority. In fact, one of the objectives of 5G is to manage all communications under a single overarching paradigm, and the staggering complexity of this task is beyond the scope of human-designed algorithms and control systems. This thesis presents an overview of all the necessary components to integrate intelligence in this complex environment, with a user-centric perspective: network optimization should always have the end goal of improving the experience of the user. Each step is described with the aid of one or more case studies, involving various network functions and elements. Starting from perception and prediction of the surrounding environment, the first core requirements of an intelligent system, this work gradually builds its way up to showing examples of fully autonomous network agents which learn from experience without any human intervention or pre-defined behavior, discussing the possible application of each aspect of intelligence in future networks.

    Medium Access Control and Routing Protocols Design for 5G

    In future wireless systems, such as 5G and beyond, the current dominating human-centric communication systems will be complemented by a tremendous increase in the number of smart devices, equipped with radios, possibly sensors, and uniquely addressable. This will result in an explosion of wireless traffic volume and, consequently, exponential growth in demand for radio spectrum. There are different engineering techniques for addressing the cost and scarcity of radio spectrum, such as coexistence of diverse devices on the same pool of radio resources, spectrum aggregation, and adoption of mmWave bands with huge spectrum. The aim of this thesis is to investigate Medium Access Control (MAC) and routing protocols for 5G and beyond radio networks. Two scenarios are addressed: a heterogeneous scenario where scheduled and uncoordinated users coexist, and a scenario where drones are used for monitoring a given area. In the heterogeneous scenario, scheduled users are synchronised with the Base Station (BS) and rely on a centralised resource scheduler for assignment of time slots, while the uncoordinated users are asynchronous with each other and the BS and rely on unslotted Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) for channel access. First, we address a single-hop network with the design of advanced scheduling algorithms and packet-length adaptation schemes. Second, we address a multi-hop network with a novel routing protocol that enhances the throughput of the scheduled users and the coexistence of all network users. In the drone-based scenario, new routing protocols are designed to address the problems of Wireless Mesh Networks with monitoring drones. In particular, a novel optimised Hybrid Wireless Mesh Protocol (O-HWMP) for quick and efficient discovery of paths is designed, and a capacity-achieving routing and scheduling algorithm, called backpressure, is investigated. To improve on the long end-to-end delays of classical backpressure, a modified backpressure algorithm is proposed and evaluated.
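
    As background for the backpressure discussion above, the sketch below shows the textbook backpressure decision at a single node: on each link, serve the flow with the largest positive queue-backlog differential. The identifiers are illustrative assumptions, and the thesis's modified, delay-aware variant is not reproduced here.

        // Minimal sketch of the classical backpressure decision at a single node.
        // Textbook rule only; the thesis's modified variant is not shown.
        #include <vector>

        struct Neighbor {
            std::vector<double> queue;   // backlog per commodity (flow) at the neighbor
        };

        // Returns the commodity index to transmit to this neighbor, or -1 to stay idle.
        int backpressureCommodity(const std::vector<double>& myQueue,
                                  const Neighbor& nbr) {
            int best = -1;
            double bestDiff = 0.0;                       // only positive differentials count
            for (size_t c = 0; c < myQueue.size(); ++c) {
                double diff = myQueue[c] - nbr.queue[c]; // backlog differential for flow c
                if (diff > bestDiff) {
                    bestDiff = diff;
                    best = static_cast<int>(c);
                }
            }
            return best;
        }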

    Tools and Algorithms for the Construction and Analysis of Systems

    This book is Open Access under a CC BY licence. The LNCS 11427 and 11428 proceedings set constitutes the proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019. The total of 42 full and 8 short tool demo papers presented in these volumes was carefully reviewed and selected from 164 submissions. The papers are organized in topical sections as follows: Part I: SAT and SMT, SAT solving and theorem proving; verification and analysis; model checking; tool demo; and machine learning. Part II: concurrent and distributed systems; monitoring and runtime verification; hybrid and stochastic systems; synthesis; symbolic verification; and safety and fault-tolerant systems.