Design and analysis for TCP-friendly window-based congestion control
The current congestion control mechanisms for the Internet date back to the early 1980s and were
primarily designed to prevent congestion collapse under the typical traffic of that era. In recent years the
amount of traffic generated by real-time multimedia applications has increased substantially, and the
existing congestion control is often ill-suited to those applications. As a result, the Internet
can degenerate into an uncontrolled system in which the overall throughput oscillates excessively because of a single
flow, which in turn can lead to poor application performance. Beyond these network-level concerns,
such applications care greatly about end-to-end delay and smooth throughput, requirements that the
conventional congestion control schemes do not meet. In this research, we investigate improving the
state of congestion control for real-time and interactive multimedia applications. The focus of this work
is to provide fairness among applications using different types of congestion control mechanisms in order to achieve
better link utilization, and to deliver smoother, more predictable throughput with suitable end-to-end
packet delay.
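The notion of "TCP-friendliness" in this abstract is usually made precise via the TCP throughput equation used by equation-based schemes such as TFRC (RFC 5348). A minimal sketch, assuming the simplified equation with one acknowledged packet per ACK and t_RTO = 4·RTT (the function name and default values are illustrative, not from this work):

```python
import math

def tcp_friendly_rate(s, rtt, p, t_rto=None):
    """Estimate a TCP-friendly sending rate in bytes/sec from the
    TCP throughput equation used by TFRC (RFC 5348).

    s     -- segment size in bytes
    rtt   -- round-trip time in seconds
    p     -- loss event rate (0 < p <= 1)
    t_rto -- retransmission timeout; RFC 5348 suggests 4 * rtt
    """
    if t_rto is None:
        t_rto = 4.0 * rtt
    denom = (rtt * math.sqrt(2.0 * p / 3.0)
             + t_rto * 3.0 * math.sqrt(3.0 * p / 8.0)
             * p * (1.0 + 32.0 * p ** 2))
    return s / denom

# A flow with 1460-byte segments, 100 ms RTT, 1% loss rate:
rate = tcp_friendly_rate(1460, 0.1, 0.01)
```

A rate-based multimedia flow that caps its sending rate at this value competes fairly with a conforming TCP flow seeing the same RTT and loss rate, while avoiding TCP's sawtooth oscillations.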
Fundamentals of Large Sensor Networks: Connectivity, Capacity, Clocks and Computation
Sensor networks potentially feature large numbers of nodes that can sense
their environment over time, communicate with each other over a wireless
network, and process information. They differ from data networks in that the
network as a whole may be designed for a specific application. We study the
theoretical foundations of such large-scale sensor networks, addressing four
fundamental issues: connectivity, capacity, clocks, and function computation.
To begin with, a sensor network must be connected so that information can
indeed be exchanged between nodes. The connectivity graph of an ad-hoc network
is modeled as a random graph and the critical range for asymptotic connectivity
is determined, as well as the critical number of neighbors that a node needs to
connect to. Next, given connectivity, we address the issue of how much data can
be transported over the sensor network. We present fundamental bounds on
capacity under several models, as well as architectural implications for how
wireless communication should be organized.
Temporal information is important both for the applications of sensor
networks and for their operation. We present fundamental bounds on the
synchronizability of clocks in networks, and also present and analyze
algorithms for clock synchronization. Finally, we turn to the issue of gathering
relevant information, which sensor networks are designed to do. One needs to
study optimal strategies for in-network aggregation of data, in order to
reliably compute a composite function of sensor measurements, as well as the
complexity of doing so. We address the issue of how such computation can be
performed efficiently in a sensor network and the algorithms for doing so, for
some classes of functions.
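The clock-synchronization problem discussed above is classically attacked by two-way message exchange, as in NTP: under the assumption of symmetric path delays, four timestamps determine both the clock offset and the round-trip delay. A minimal sketch of that standard estimator (not this paper's specific algorithm):

```python
def estimate_offset_delay(t1, t2, t3, t4):
    """Classic two-way time transfer (as in NTP).

    t1: request sent (client clock)    t2: request received (server clock)
    t3: reply sent (server clock)      t4: reply received (client clock)

    Assuming forward and return path delays are symmetric, the
    server-minus-client clock offset and the round-trip delay are:
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Server clock runs 5 units ahead; one-way delay is 2 units each way.
# Client sends at t1=10; server receives at t2=17 (12 real + 5 offset),
# replies at t3=18; client receives at t4=15.
offset, delay = estimate_offset_delay(10, 17, 18, 15)
```

Asymmetric delays bias the offset estimate by half the delay asymmetry, which is one reason fundamental limits on network-wide synchronizability are of interest.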
State-recycling and time-resolved imaging in topological photonic lattices
Photonic lattices - arrays of optical waveguides - are powerful platforms for
simulating a range of phenomena, including topological phases. While probing
dynamics is possible in these systems, by reinterpreting the propagation
direction as "time," accessing long timescales constitutes a severe
experimental challenge. Here, we overcome this limitation by placing the
photonic lattice in a cavity, which allows the optical state to evolve through
the lattice multiple times. The accompanying detection method, which exploits a
multi-pixel single-photon detector array, offers quasi-real time-resolved
measurements after each round trip. We apply the state-recycling scheme to
intriguing photonic lattices emulating Dirac fermions and Floquet topological
phases. In this new platform, we also realise a synthetic pulsed electric
field, which can be used to drive transport within photonic lattices. This work
opens a new route towards the detection of long timescale effects in engineered
photonic lattices and the realization of hybrid analogue-digital simulators.Comment: Comments are welcom
The Xpress Transfer Protocol (XTP): A tutorial (expanded version)
The Xpress Transfer Protocol (XTP) is a reliable, real-time, lightweight transfer-layer protocol. Current transport-layer protocols such as DoD's Transmission Control Protocol (TCP) and ISO's Transport Protocol (TP) were not designed for the next generation of high-speed, interconnected reliable networks such as the fiber distributed data interface (FDDI) and gigabit/second wide area networks. Unlike all previous transport-layer protocols, XTP is being designed to be implemented in hardware as a VLSI chip set. By streamlining the protocol, combining the transport and network layers, and utilizing the increased speed and parallelization possible with a VLSI implementation, XTP will be able to provide the end-to-end data transmission rates demanded in high-speed networks without compromising reliability and functionality. This paper describes the operation of the XTP protocol and, in particular, its error, flow, and rate control; internetwork addressing mechanisms; and multicast support features, as defined in the XTP Protocol Definition Revision 3.4.
An Improved Link Model for Window Flow Control and Its Application to FAST TCP
This paper presents a link model which captures the queue dynamics in response to a change in a transmission control protocol (TCP) source's congestion window. By considering both self-clocking and the link integrator effect, the model generalizes existing models and is shown to be more accurate by both open-loop and closed-loop packet-level simulations. It reduces to the known static link model when flows' round-trip delays are identical, and approximates the standard integrator link model when there is significant cross traffic. We apply this model to the stability analysis of fast active queue management scalable TCP (FAST TCP), including its filter dynamics. Under this model, the FAST control law is linearly stable for a single bottleneck link with an arbitrary distribution of round-trip delays. This result resolves the notable discrepancy between empirical observations and previous theoretical predictions. The analysis highlights the critical role of self-clocking in TCP stability, and the proof technique is new and less conservative than existing ones.
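The FAST control law analyzed here is commonly written as a smoothed update that pulls the window toward the point where the flow keeps a target backlog of α packets queued in the network. A minimal sketch of one update step (the parameter values are illustrative, not taken from the paper):

```python
def fast_tcp_window_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
    """One step of the FAST TCP window control law:

        w <- (1 - gamma) * w + gamma * (base_rtt / rtt * w + alpha)

    base_rtt is the minimum observed RTT (propagation delay only),
    alpha is the target number of the flow's packets queued in the
    network, and gamma in (0, 1] damps the update.
    """
    return (1.0 - gamma) * w + gamma * (base_rtt / rtt * w + alpha)

# At equilibrium, w * (1 - base_rtt/rtt) = alpha: the flow holds
# about alpha packets in the bottleneck queue. With base_rtt = 50 ms
# and rtt = 100 ms the fixed point is w = 400 packets.
w = 100.0
for _ in range(200):
    w = fast_tcp_window_update(w, base_rtt=0.05, rtt=0.1)
```

Because the update uses queueing delay (rtt - base_rtt) rather than loss as its congestion signal, the window settles at a fixed point instead of oscillating; the paper's contribution is showing this law remains stable under a more accurate link model with heterogeneous delays.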
Efficient Logging in Non-Volatile Memory by Exploiting Coherency Protocols
Non-volatile memory (NVM) technologies such as PCM, ReRAM and STT-RAM allow
processors to directly write values to persistent storage at speeds that are
significantly faster than previous durable media such as hard drives or SSDs.
Many applications of NVM are constructed on a logging subsystem, which enables
operations to appear to execute atomically and facilitates recovery from
failures. Writes to NVM, however, pass through a processor's memory system,
which can delay and reorder them, jeopardizing the correctness and increasing
the cost of logging algorithms.
Reordering arises because of out-of-order execution in a CPU and the
inter-processor cache coherence protocol. By carefully considering the
properties of these reorderings, this paper develops a logging protocol that
requires only one round trip to non-volatile memory while avoiding expensive
computations. We show how to extend the logging protocol to build a
persistent set (hash map) that also requires only a single round trip to
non-volatile memory for insertion, update, or deletion.
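The paper's protocol relies on ordering guarantees extracted from the cache coherence protocol itself. As a language-level illustration of why a single round trip can suffice at all, the sketch below shows a well-known complementary technique: self-validating log entries whose checksum lets recovery detect a torn (partially persisted) write, so no separate commit record is needed. This is an illustrative simulation, not the paper's mechanism:

```python
import struct
import zlib

def make_log_entry(payload: bytes) -> bytes:
    """Build a self-validating log entry: length + CRC32 + payload.
    Because the checksum covers the payload, recovery can detect a
    torn entry without a separate commit record, so appending the
    entry needs only one write to the persistent log."""
    header = struct.pack("<II", len(payload), zlib.crc32(payload))
    return header + payload

def recover(log: bytes):
    """Scan the log, collecting payloads of entries whose checksum
    validates; stop at the first torn or incomplete entry."""
    entries, off = [], 0
    while off + 8 <= len(log):
        length, crc = struct.unpack_from("<II", log, off)
        payload = log[off + 8 : off + 8 + length]
        if len(payload) < length or zlib.crc32(payload) != crc:
            break  # torn write: this entry never fully persisted
        entries.append(payload)
        off += 8 + length
    return entries

log = make_log_entry(b"insert k1=v1") + make_log_entry(b"delete k0")
# Simulate a crash mid-write: the last entry is truncated by 3 bytes.
torn = log + make_log_entry(b"insert k2=v2")[:-3]
```

On real NVM hardware this idea is only sound if cache lines are flushed and fenced so the entry reaches persistence in a known order, which is precisely the memory-system reordering problem the paper addresses.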