TCP-Aware Backpressure Routing and Scheduling
In this work, we explore the performance of backpressure routing and
scheduling for TCP flows over wireless networks. TCP and backpressure are not
compatible due to a mismatch between the congestion control mechanism of TCP
and the queue-size-based routing and scheduling of the backpressure framework.
We propose a TCP-aware backpressure routing and scheduling algorithm that
takes into account the behavior of TCP flows. TCP-aware backpressure (i)
provides throughput optimality guarantees in the Lyapunov optimization
framework, (ii) gracefully combines TCP and backpressure without making any
changes to the TCP protocol, (iii) significantly improves the throughput of
TCP flows, and (iv) provides fairness across competing TCP flows.
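As background for the mismatch described above, the classic backpressure rule can be sketched as follows: each link is weighted by its largest queue-length differential across commodities, and the scheduler serves the commodity with that largest backlog difference. This is a minimal illustration of the standard framework, not the paper's TCP-aware variant; the function and variable names are our own.

```python
# Minimal sketch of classic backpressure link weighting (illustrative,
# not the paper's TCP-aware variant). Each link is weighted by the
# maximum queue-length differential across commodities; the scheduler
# activates the links with the largest weights.

def backpressure_weight(q_src, q_dst):
    """Return (best_commodity, weight) for one link.

    q_src, q_dst: dicts mapping commodity -> queue length at the link's
    transmitter and receiver, respectively.
    """
    best, weight = None, 0
    for c in q_src:
        diff = q_src[c] - q_dst.get(c, 0)
        if diff > weight:
            best, weight = c, diff
    return best, weight

# Example: commodity "b" has the larger backlog differential (9 - 2 = 7),
# so this link would serve commodity "b" with weight 7.
commodity, w = backpressure_weight({"a": 5, "b": 9}, {"a": 4, "b": 2})
```

Because the rule reacts only to queue sizes, a TCP sender's window dynamics are invisible to it, which is the source of the incompatibility the abstract describes.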
Mitigating interconnect and end host congestion in modern networks
One of the most critical building blocks of the Internet is the mechanism to mitigate network congestion. While existing congestion control approaches have served their purpose well over the last decades, the last few years have seen a significant increase in new applications and user demand, stressing the network infrastructure to the extent that new ways of handling congestion are required. This dissertation identifies the congestion problems caused by the increased scale of network usage, both at inter-AS interconnects and on end hosts in data centers, and presents abstractions and frameworks that allow for improved solutions to mitigate congestion.

To mitigate inter-AS congestion, we develop Unison, a framework that allows an ISP to jointly optimize its intra-domain and inter-domain routes in collaboration with content providers. The basic idea is to provide the ISP operator and the neighbors of the ISP with an abstraction of the ISP network in the form of a virtual switch (vSwitch). Unison allows the ISP to provide hints to its neighbors, suggesting alternative routes that can improve their performance. We investigate how the vSwitch abstraction can be used to maximize the throughput of the ISP.

To mitigate end-host congestion in data center networks, we develop a backpressure mechanism for the queuing architecture of congested end hosts that copes with tens of thousands of flows. We show that current end-host mechanisms can lead to high CPU utilization, high tail latency, and low throughput when egress traffic is congested. We introduce the design, implementation, and evaluation of the zero-drop networking (zD) stack, a new architecture for handling congestion of scheduled buffers.

Besides queue overflow, another cause of congestion is CPU resource exhaustion. The CPU cost of processing packets in networking stacks, however, has not been fully investigated in the literature. Much of the community's focus has been on scaling servers in terms of aggregate traffic intensity, but bottlenecks caused by the increasing number of concurrent flows have received little attention. We conduct a comprehensive analysis of the CPU cost of processing packets and identify the root causes of high CPU overhead and degraded throughput and RTT. Our work highlights considerations beyond packets per second for the design of future stacks that scale to millions of flows.
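The zero-drop idea above can be caricatured with a bounded queue that blocks the producer instead of dropping when full, so the sender is paced by the consumer. This is a hedged sketch of the general backpressure-instead-of-drop principle, not the zD stack's actual implementation; buffer size and names are arbitrary.

```python
# Illustrative sketch: when a scheduled buffer fills, the enqueue side
# blocks (backpressure) rather than dropping packets, so every packet
# eventually reaches the consumer. Not the zD stack's actual design.
import queue
import threading

buf = queue.Queue(maxsize=4)  # small "scheduled buffer"
received = []

def producer(n):
    for i in range(n):
        buf.put(i)   # blocks while buf is full: no drops
    buf.put(None)    # sentinel to stop the consumer

def consumer():
    while True:
        pkt = buf.get()
        if pkt is None:
            break
        received.append(pkt)

t = threading.Thread(target=producer, args=(10,))
c = threading.Thread(target=consumer)
t.start(); c.start()
t.join(); c.join()
# All 10 packets arrive in order despite the 4-slot buffer.
```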
End-to-End Simulation of 5G mmWave Networks
Due to its potential for multi-gigabit and low latency wireless links,
millimeter wave (mmWave) technology is expected to play a central role in 5th
generation cellular systems. While there has been considerable progress in
understanding the mmWave physical layer, innovations will be required at all
layers of the protocol stack, in both the access and the core network.
Discrete-event network simulation is essential for end-to-end, cross-layer
research and development. This paper provides a tutorial on a recently
developed full-stack mmWave module integrated into the widely used open-source
ns-3 simulator. The module includes a number of detailed statistical channel
models as well as the ability to incorporate real measurements or ray-tracing
data. The Physical (PHY) and Medium Access Control (MAC) layers are modular and
highly customizable, making it easy to integrate algorithms or compare
Orthogonal Frequency Division Multiplexing (OFDM) numerologies, for example.
The module is interfaced with the core network of the ns-3 Long Term Evolution
(LTE) module for full-stack simulations of end-to-end connectivity, and
advanced architectural features, such as dual-connectivity, are also available.
To facilitate the understanding of the module, and verify its correct
functioning, we provide several examples that show the performance of the
custom mmWave stack as well as custom congestion control algorithms designed
specifically for efficient utilization of the mmWave channel.
Comment: 25 pages, 16 figures, submitted to IEEE Communications Surveys and
Tutorials (revised Jan. 2018).
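The OFDM numerologies that the module lets one compare follow a simple scaling rule in 5G NR (3GPP TS 38.211): subcarrier spacing is 15 * 2^mu kHz, and slot duration shrinks by the same factor of 2^mu. A quick reference computation:

```python
# 5G NR numerology scaling (3GPP TS 38.211): subcarrier spacing (SCS)
# is 15 * 2^mu kHz; each 1 ms subframe holds 2^mu slots, so a slot
# (14 OFDM symbols, normal cyclic prefix) lasts 1 / 2^mu ms.

def numerology(mu):
    scs_khz = 15 * 2 ** mu        # subcarrier spacing in kHz
    slot_ms = 1.0 / 2 ** mu       # slot duration in ms
    slots_per_subframe = 2 ** mu  # slots per 1 ms subframe
    return scs_khz, slot_ms, slots_per_subframe

for mu in range(4):
    scs, slot, n = numerology(mu)
    print(f"mu={mu}: {scs} kHz SCS, {slot} ms slot, {n} slots/subframe")
```

Higher numerologies (larger mu, wider subcarrier spacing) are the ones typically used at mmWave frequencies, since shorter slots reduce air-interface latency.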
Enabling RAN Slicing Through Carrier Aggregation in mmWave Cellular Networks
The ever-increasing number of connected devices and of new, heterogeneous
mobile use cases implies that 5G cellular systems will face demanding technical
challenges. For example, Ultra-Reliable Low-Latency Communication (URLLC) and
enhanced Mobile Broadband (eMBB) scenarios present orthogonal Quality of
Service (QoS) requirements that 5G aims to satisfy with a unified Radio Access
Network (RAN) design. Network slicing and mmWave communications have been
identified as possible enablers for 5G. They provide, respectively, the
necessary scalability and flexibility to adapt the network to each specific use
case environment, and low latency and multi-gigabit-per-second wireless links,
which tap into a vast, currently unused portion of the spectrum. The
optimization and integration of these technologies is still an open research
challenge, which requires innovations at different layers of the protocol
stack. This paper proposes to combine them in a RAN slicing framework for
mmWaves, based on carrier aggregation. Notably, we introduce MilliSlice, a
cross-carrier scheduling policy that exploits the diversity of the carriers and
maximizes their utilization, thus simultaneously guaranteeing high throughput
for the eMBB slices and low latency and high reliability for the URLLC flows.
Comment: 8 pages, 8 figures. Proc. of the 18th Mediterranean Communication and
Computer Networking Conference (MedComNet 2020), Arona, Italy, 2020.
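A cross-carrier slicing policy of the kind described can be sketched as follows. This is an illustrative greedy allocator under our own simplifying assumptions (abstract "resource units" per carrier, static demands), not MilliSlice's actual policy: URLLC flows are placed first to protect latency and reliability, and eMBB flows then fill the remaining capacity.

```python
# Hedged sketch of cross-carrier slice scheduling (illustrative only,
# not MilliSlice's actual algorithm). URLLC flows get first pick of the
# carrier with the most free capacity; eMBB flows fill what remains.

def schedule(carriers, urllc, embb):
    """carriers: dict carrier -> capacity (abstract resource units).
    urllc, embb: lists of (flow, demand). Returns flow -> carrier map."""
    free = dict(carriers)
    alloc = {}
    # URLLC first: protect latency/reliability by placing each flow on
    # the carrier with the most free capacity.
    for flow, demand in urllc:
        best = max(free, key=free.get)
        if free[best] >= demand:
            alloc[flow] = best
            free[best] -= demand
    # eMBB then fills leftover capacity, largest remainder first.
    for flow, demand in embb:
        for c in sorted(free, key=free.get, reverse=True):
            if free[c] >= demand:
                alloc[flow] = c
                free[c] -= demand
                break
    return alloc

# Example: u1 lands on cc0 (most free capacity), then e1 also fits on cc0.
alloc = schedule({"cc0": 10, "cc1": 6}, urllc=[("u1", 3)], embb=[("e1", 5)])
```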
Intelligent Management and Efficient Operation of Big Data
This chapter details how Big Data can be used and implemented in networking
and computing infrastructures. Specifically, it addresses three main aspects:
the timely extraction of relevant knowledge from heterogeneous, and very often
unstructured, large data sources; improving the performance of the processing
and networking (cloud) infrastructures that are the foundational pillars of
Big Data applications and services; and novel ways to efficiently manage
network infrastructures with high-level composed policies for supporting the
transmission of large amounts of data with distinct requirements (video vs.
non-video). A case study involving an intelligent
management solution to route data traffic with diverse requirements in a wide
area Internet Exchange Point is presented, discussed in the context of Big
Data, and evaluated.
Comment: In book Handbook of Research on Trends and Future Directions in Big
Data and Web Intelligence, IGI Global, 201
Design techniques for low-power systems
Portable products are used increasingly. Because these systems are battery powered, reducing power consumption is vital. In this report we describe the properties of low-power design and techniques for exploiting them in the system architecture. We focus on minimizing capacitance, avoiding unnecessary and wasteful activity, and reducing voltage and frequency. We review energy-reduction techniques in the architecture and design of a hand-held computer and its wireless communication system, including error control, system decomposition, communication and MAC protocols, and low-power short-range networks.
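The voltage and frequency levers mentioned above follow from the dynamic power of CMOS logic, P = a * C * V^2 * f (a: activity factor, C: switched capacitance). Scaling voltage and frequency together is especially effective because power then falls cubically. A worked example with representative (assumed) values:

```python
# Dynamic CMOS power: P = a * C * V^2 * f
# (a: activity factor, C: switched capacitance, V: supply voltage,
#  f: clock frequency). The numeric values below are illustrative.

def dynamic_power(activity, capacitance, voltage, frequency):
    return activity * capacitance * voltage ** 2 * frequency

p_full = dynamic_power(0.5, 1e-9, 1.2, 1e9)    # 1.2 V at 1 GHz
p_half = dynamic_power(0.5, 1e-9, 0.6, 0.5e9)  # half voltage, half freq
ratio = p_full / p_half
# Halving V and f cuts power ~8x: 2x from frequency, 4x from V^2.
```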
Computing infrastructure issues in distributed communications systems : a survey of operating system transport system architectures
The performance of distributed applications (such as file transfer, remote login, tele-conferencing, full-motion video, and scientific visualization) is influenced by several factors that interact in complex ways. In particular, application performance is significantly affected by both communication infrastructure factors and computing infrastructure factors. Communication infrastructure factors include channel speed, bit-error rate, and congestion at intermediate switching nodes. Computing infrastructure factors include (among other things) both protocol processing activities (such as connection management, flow control, error detection, and retransmission) and general operating system factors (such as memory latency, CPU speed, interrupt and context switching overhead, process architecture, and message buffering). Due to a several-orders-of-magnitude increase in network channel speed and an increase in application diversity, performance bottlenecks are shifting from network factors to transport system factors.

This paper defines an abstraction called an "Operating System Transport System Architecture" (OSTSA) that is used to classify the major components and services in the computing infrastructure. End-to-end network protocols such as TCP, TP4, VMTP, XTP, and Delta-t typically run on general-purpose computers, where they utilize various operating system resources such as processors, virtual memory, and network controllers. The OSTSA provides services that integrate these resources to support distributed applications running on local and wide area networks.

A taxonomy is presented to evaluate OSTSAs in terms of their support for protocol processing activities. We use this taxonomy to compare and contrast five general-purpose commercial and experimental operating systems: System V UNIX, BSD UNIX, the x-kernel, Choices, and Xinu.