52 research outputs found
SLBN: A Scalable Max-min Fair Algorithm for Rate-Based Explicit Congestion Control
The growth of the Internet has increased the need for scalable congestion control mechanisms in high-speed networks. In this context, we propose a rate-based explicit congestion control mechanism in which sources are provided with the rate at which they can transmit. These rates are computed with a distributed max-min fair algorithm, SLBN. The novelty of SLBN is that it combines two interesting features not simultaneously present in existing proposals: scalability and fast convergence to the max-min fair rates, even under high session churn. SLBN is scalable because routers only maintain a constant amount of state information (only three integer variables per link) and only incur a constant amount of computation per protocol packet, independently of the number of sessions that cross the router. Additionally, SLBN does not require processing any data packet, and it converges independently of sessions' RTT. Finally, by design, the protocol is conservative when assigning rates, even in the presence of high churn, which helps prevent link overshoots in transient periods. We claim that, with all these features, our mechanism is a good candidate to be used in real deployments.
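SLBN's distributed, constant-state computation is not reproduced here, but the max-min fairness criterion it converges to can be illustrated with a minimal centralized water-filling sketch for a single shared link. The function name, the single-link setting, and the demand-dictionary interface are assumptions for illustration, not part of SLBN:

```python
def max_min_fair(capacity, demands):
    """Water-filling allocation of one link's capacity among sessions.

    Repeatedly splits the remaining capacity equally among unsatisfied
    sessions; a session whose demand is met drops out, and its unused
    share is redistributed. The result is max-min fair: no session can
    gain rate except by taking it from a session with an equal or
    smaller allocation.
    """
    alloc = {s: 0.0 for s in demands}
    unsat = dict(demands)          # sessions whose demand is not yet met
    remaining = capacity
    while unsat and remaining > 1e-12:
        share = remaining / len(unsat)
        for s, demand in list(unsat.items()):
            give = min(share, demand - alloc[s])
            alloc[s] += give
            remaining -= give
            if alloc[s] >= demand - 1e-12:
                del unsat[s]       # satisfied; frees capacity for others
    return alloc

# Example: capacity 10, demands 2, 8, 10 -> allocations 2, 4, 4
rates = max_min_fair(10.0, {'a': 2.0, 'b': 8.0, 'c': 10.0})
```

The sketch is O(n) per pass over all sessions, which is exactly the per-router cost SLBN avoids by keeping only three integer variables per link.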
Architectural Techniques to Enable Reliable and Scalable Memory Systems
High-capacity and scalable memory systems play a vital role in enabling our desktops, smartphones, and pervasive technologies like the Internet of Things (IoT). Unfortunately, memory systems are becoming increasingly prone to faults. This is because we rely on technology scaling to improve memory density, and at small feature sizes memory cells tend to break easily. Today, memory reliability is seen as the key impediment to using high-density devices, adopting new technologies, and even building the next Exascale supercomputer. To ensure even a bare-minimum level of reliability, present-day solutions tend to have high performance, power, and area overheads. Ideally, we would like memory systems to remain robust, scalable, and implementable while keeping the overheads to a minimum. This dissertation describes how simple cross-layer architectural techniques can provide orders of magnitude higher reliability and enable seamless scalability for memory systems while incurring negligible overheads.
Comment: PhD thesis, Georgia Institute of Technology (May 2017).
SAR processing using PVM
Bibliography: pages 120-121. This thesis explores various methods of using PVM (Parallel Virtual Machine) to improve the speed of processing a SAR (Synthetic Aperture Radar) image. A network of heterogeneous machines was set up as the basis of the parallel virtual machine. SAR processing software was written for testing the PVM. The software performed simplified range and azimuth compression on simulated SAR images of a point target. The theory and results were examined as part of the thesis. Complications such as range curvature, range migration and range-dependent focusing were not addressed.
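The simplified range compression the thesis parallelises amounts to matched filtering: each range line is correlated with a replica of the transmitted chirp, so a point target collapses to a sharp peak at its delay. A minimal serial sketch follows; the function name, chirp parameters, and the use of NumPy in place of the thesis's PVM-distributed implementation are assumptions for illustration:

```python
import numpy as np

def range_compress(raw_line, chirp):
    """Matched-filter one range line against the transmitted chirp.

    Correlation is done in the frequency domain (circular correlation):
    IFFT( FFT(raw) * conj(FFT(chirp, zero-padded to line length)) ).
    A point target's echo compresses to a peak at its delay sample.
    """
    n = len(raw_line)
    spectrum = np.fft.fft(raw_line) * np.conj(np.fft.fft(chirp, n))
    return np.fft.ifft(spectrum)

# Example: a chirp echo embedded at sample 100 of a 512-sample line
t = np.arange(64)
chirp = np.exp(1j * np.pi * 0.01 * t**2)   # illustrative linear-FM chirp
raw = np.zeros(512, dtype=complex)
raw[100:164] = chirp
compressed = range_compress(raw, chirp)     # |compressed| peaks at index 100
```

In a PVM-style decomposition, independent range lines (and later, azimuth columns) would simply be farmed out to worker machines, since each line is compressed independently.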
Retail payment systems in the OIC Member Countries
Retail payment systems have been applied to one of the oldest problems of civilisations: how payment can be made for goods. In this report we address these systems primarily from the perspective of those relatively new technologies, businesses and processes that challenge cash-based systems. Our purpose is to explain these new technologies and their significance for OIC Member States and to offer recommendations on how to learn from best practices that can enhance the economies of these countries
Cyberidentities
This innovative study explores diverse aspects of Canadian and European identity on the information highway and reaches beyond technical issues to confront and explore communication, culture and the culture of communication