
    SSthreshless Start: A Sender-Side TCP Intelligence for Long Fat Network

    Measurements show that 85% of TCP flows on the Internet are short-lived and spend most of their lifetime in the TCP startup phase. However, many previous studies indicate that the traditional TCP Slow Start algorithm does not perform well, especially in long fat networks. Two well-known problems impact Slow Start performance: the blind initial setting of the Slow Start threshold and the aggressive increase of the probing rate during the startup phase regardless of the buffer sizes along the path. Current efforts that tune the Slow Start threshold and/or the probing rate during the startup phase have not proved very effective, which motivates a different approach. In this paper, we present a novel TCP startup method, called threshold-less slow start or SSthreshless Start, which does not need a Slow Start threshold to operate. Instead, SSthreshless Start uses the backlog status at the bottleneck buffer to adaptively adjust the probing rate, allowing it to better seize the available bandwidth. Compared to the traditional and other major modified startup methods, our simulation results show that SSthreshless Start achieves significant performance improvement during the startup phase. Moreover, SSthreshless Start scales well over a wide range of buffer sizes, propagation delays, and network bandwidths, and it shows excellent friendliness when operating simultaneously with the currently popular TCP NewReno.
    Comment: 25 pages, 10 figures, 7 tables
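    The abstract's key idea is to pace startup growth by the backlog at the bottleneck rather than by a preset threshold. The sketch below illustrates one way a sender could do this, estimating backlog from RTT inflation in the style of TCP Vegas; the function name, thresholds, and update rule are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch (not the paper's algorithm): estimate the backlog a
# startup flow has built at the bottleneck from RTT inflation, and grow the
# congestion window more gently as the estimated backlog rises.

def startup_cwnd_update(cwnd, base_rtt, last_rtt, mss=1, backlog_limit=16):
    """Return the next congestion window (in segments) for one round of startup.

    backlog ~ cwnd * (last_rtt - base_rtt) / last_rtt  (a Vegas-style estimate)
      - well below the limit: double the window (classic Slow Start pace)
      - approaching the limit: grow linearly
      - at or above the limit: hold, the bottleneck buffer is already backlogged
    """
    backlog = cwnd * max(last_rtt - base_rtt, 0.0) / last_rtt
    if backlog < backlog_limit / 2:
        return cwnd * 2        # plenty of headroom: exponential growth
    if backlog < backlog_limit:
        return cwnd + mss      # buffer filling up: probe gently
    return cwnd                # estimated backlog reached the limit: stop growing


if __name__ == "__main__":
    cwnd, base_rtt = 10, 0.100
    for rtt in (0.100, 0.102, 0.110, 0.140, 0.180):  # RTT inflates as the queue builds
        cwnd = startup_cwnd_update(cwnd, base_rtt, rtt)
        print(f"rtt={rtt * 1000:.0f} ms -> cwnd={cwnd} segments")
```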

    The Quest for Bandwidth Estimation Techniques for large-scale Distributed Systems

    In recent years the research community has developed many techniques to estimate the end-to-end available bandwidth of an Internet path. This important metric has been proposed for use in several distributed systems and, more recently, has even been considered as a means to improve the congestion control mechanism of TCP. It has therefore been suggested that some existing estimation techniques could be reused for this purpose. However, existing tools were not designed for large-scale deployments and were mostly validated in controlled settings, with only one measurement running at a time. In this paper, we argue that current tools, while offering good estimates when used alone, might not work in large-scale systems where several concurrent estimations severely interfere with each other. We analyze the properties of the measurement paradigms employed today, discuss how they operate, study their overhead, and analyze their mutual interference. Our testbed results show that current techniques are insufficient as they stand. Finally, we discuss and propose some principles that should be taken into account when including available bandwidth measurements in large-scale distributed systems.
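    Many of the techniques the paper examines follow the probe-gap measurement paradigm. Below is a minimal sketch of that estimate (as used by tools such as Spruce); the helper name and numbers are illustrative, and the paper's point is precisely that such estimates degrade when many of them run concurrently.

```python
# Probe-gap sketch: a pair of probe packets leaves the sender with spacing g_in;
# cross traffic queued between them at the bottleneck (capacity C) stretches the
# spacing to g_out, which reveals the cross-traffic rate and hence the available
# bandwidth. Numbers below are illustrative.

def available_bandwidth(capacity_bps, gap_in, gap_out):
    """Probe-gap estimate: cross traffic = C * (g_out - g_in) / g_in,
    available = C - cross traffic, clamped to [0, C]."""
    cross = capacity_bps * max(gap_out - gap_in, 0.0) / gap_in
    return max(min(capacity_bps - cross, capacity_bps), 0.0)


if __name__ == "__main__":
    C = 100e6          # 100 Mbit/s bottleneck capacity
    g_in = 120e-6      # 120 microsecond input gap (1500-byte packets at C)
    for g_out in (120e-6, 180e-6, 240e-6):
        est = available_bandwidth(C, g_in, g_out)
        print(f"g_out = {g_out * 1e6:.0f} us -> ~{est / 1e6:.0f} Mbit/s available")
```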

    Review of high-contrast imaging systems for current and future ground- and space-based telescopes I. Coronagraph design methods and optical performance metrics

    The Optimal Optical Coronagraph (OOC) Workshop at the Lorentz Center in September 2017 in Leiden, the Netherlands, gathered a diverse group of 25 researchers working on exoplanet instrumentation to stimulate the emergence and sharing of new ideas. In this first installment of a series of three papers summarizing the outcomes of the OOC workshop, we present an overview of design methods and optical performance metrics developed for coronagraph instruments. The design and optimization of coronagraphs for future telescopes has progressed rapidly over the past several years in the context of space mission studies for Exo-C, WFIRST, HabEx, and LUVOIR as well as ground-based telescopes. Design tools have been developed at several institutions to optimize a variety of coronagraph mask types. We aim to give a broad overview of the approaches used, give examples of their utility, and provide the optimization tools to the community. Though it is clear that the basic function of coronagraphs is to suppress starlight while maintaining light from off-axis sources, our community lacks a general set of standard performance metrics that apply to both detecting and characterizing exoplanets. The attendees of the OOC workshop agreed that it would benefit our community to clearly define quantities for comparing the performance of coronagraph designs and systems. Therefore, we also present a set of metrics that may be applied to theoretical designs, testbeds, and deployed instruments. We show how these quantities may be used to easily relate the basic properties of the optical instrument to the detection significance of a given point source in the presence of realistic noise.
    Comment: To appear in Proceedings of the SPIE, vol. 1069
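    As a rough illustration of how instrument properties can map to detection significance, here is a minimal photon-noise-limited signal-to-noise sketch; the noise terms, rates, and function name are assumptions for illustration, not the standardized metric set the workshop proposes.

```python
# Illustrative photon-noise-limited SNR for a point source behind a coronagraph.
# The noise budget here (planet photons, residual starlight, background) is an
# assumption for illustration, not the standardized metric set of the workshop.
import math

def detection_snr(planet_rate, residual_star_rate, background_rate, t_exp):
    """SNR after integrating for t_exp seconds, assuming Poisson statistics.

    All rates are detected photo-electrons per second inside the photometric
    aperture; SNR = signal / sqrt(total variance).
    """
    signal = planet_rate * t_exp
    variance = (planet_rate + residual_star_rate + background_rate) * t_exp
    return signal / math.sqrt(variance)


if __name__ == "__main__":
    # A faint planet at 0.05 e-/s against 0.5 e-/s of residual starlight.
    for hours in (1, 10, 100):
        snr = detection_snr(0.05, 0.5, 0.05, hours * 3600)
        print(f"{hours:>3} h -> SNR ~ {snr:.1f}")
```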

    Overcoming Bandwidth Fluctuations in Hybrid Networks with QoS-Aware Adaptive Routing

    With an escalating reliance on sensor-driven scientific endeavors in challenging terrains, robust hybrid networks, formed by a combination of wireless and wired links, are more important than ever. These networks serve as essential channels for streaming data to centralized data centers, but their efficiency is often degraded by bandwidth fluctuations and network congestion. In bandwidth-sensitive hybrid networks in particular, these issues pose demanding challenges to Quality of Service (QoS). Traditional network management solutions fail to respond adaptively to these dynamic conditions, underscoring the need for innovative solutions. This thesis introduces a novel approach that leverages Software-Defined Networking (SDN) to establish a dynamic, congestion-aware routing mechanism. The proposed mechanism relies on bandwidth-based measurements to accurately detect and localize network congestion and, unlike traditional methodologies based on rigid route management, adjusts data-flow routes dynamically. Experimental results show clear improvements in network utilization and application performance. Furthermore, the proposed algorithm scales well, providing quick route-finding for many data flows without impacting system performance. This thesis thus contributes to the ongoing discourse on improving hybrid network efficiency in challenging conditions and sets the stage for future work in this area.
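    One simple way to turn per-link available-bandwidth measurements into congestion-aware routes is to select the path with the widest bottleneck. The sketch below shows that idea; it is an illustrative assumption about the mechanism, not necessarily the thesis's own algorithm, and the topology and numbers are made up.

```python
# Widest-path route selection from per-link available-bandwidth measurements:
# pick the path whose bottleneck (minimum-bandwidth) link is largest, so flows
# steer around congested links. Topology and numbers are illustrative.
import heapq

def widest_path(links, src, dst):
    """links: {(u, v): available bandwidth}; returns (bottleneck_bw, path)."""
    adj = {}
    for (u, v), bw in links.items():
        adj.setdefault(u, []).append((v, bw))
        adj.setdefault(v, []).append((u, bw))
    best = {src: float("inf")}
    prev = {}
    heap = [(-float("inf"), src)]                 # max-heap on bottleneck bandwidth
    while heap:
        neg_bw, node = heapq.heappop(heap)
        if node == dst:
            break
        for nxt, bw in adj.get(node, []):
            bottleneck = min(-neg_bw, bw)
            if bottleneck > best.get(nxt, 0.0):   # found a wider route to nxt
                best[nxt] = bottleneck
                prev[nxt] = node
                heapq.heappush(heap, (-bottleneck, nxt))
    path, node = [dst], dst
    while node != src:                            # walk predecessors back to src
        node = prev[node]
        path.append(node)
    return best[dst], path[::-1]


if __name__ == "__main__":
    measured = {("sensor", "a"): 40, ("sensor", "b"): 90,
                ("a", "dc"): 100, ("b", "dc"): 60}   # Mbit/s, illustrative
    print(widest_path(measured, "sensor", "dc"))     # -> (60, ['sensor', 'b', 'dc'])
```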

    An Unobtrusive Method for Tracking Network Latency in Online Games

    Online games are a very important class of distributed interactive applications. Their success is heavily dependent on the level of consistency that can be maintained between participants communicating in the virtual world. Achieving a high level of consistency usually involves transmitting a large amount of network traffic. However, if the underlying network connecting participants is unable to carry this traffic, network latency will increase, which in turn degrades consistency. Many schemes attempt to reduce network traffic and thus the effect of network latency on the interactive application. However, applications that employ these schemes tend to do so with little knowledge of the underlying network conditions and assume a worst-case scenario of limited bandwidth. This assumption can actually cause latency reduction schemes to perform sub-optimally and, ironically, introduce more inconsistency than they remove. Hence, it is important that online game applications become aware of network conditions such as available bandwidth. Existing methods of estimating bandwidth analyse trends in one-way latency and require extra data to be transmitted between nodes in order to capture those trends. Such an approach does not suit online games, as the extra data could increase network latency and limit the application's ability to scale to many participants. To deal with this issue, this paper proposes a method by which online games can unobtrusively track one-way network latency. The method requires no time-stamping information to be transmitted between participants and operates on data already being exchanged as part of the online game, so its impact on the network is minimal. NS2 simulations demonstrate that the trends collected by this method can be used to estimate bandwidth under certain conditions.
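    The core idea can be illustrated as follows: because game updates leave the sender at a known tick rate, the receiver can derive relative one-way delay from arrival times alone, with no timestamps on the wire. The sketch below shows that general idea; the function and values are illustrative and not necessarily the paper's exact method.

```python
# Relative one-way delay from arrival times of periodic game updates: since the
# k-th update left the sender roughly k * tick_interval after the first one, the
# receiver can see how much extra delay each update accumulated without any
# timestamps being transmitted. A rising trend suggests queuing on the path.

def relative_one_way_delay(arrival_times, tick_interval):
    """arrival_times[k] is the receive time of the k-th periodic update."""
    t0 = arrival_times[0]
    return [(t - t0) - k * tick_interval for k, t in enumerate(arrival_times)]


if __name__ == "__main__":
    tick = 0.05                                           # 20 updates per second
    arrivals = [10.000, 10.051, 10.103, 10.160, 10.220]   # illustrative receive times
    for k, d in enumerate(relative_one_way_delay(arrivals, tick)):
        print(f"update {k}: relative delay {d * 1000:+.0f} ms")
```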

    EGOIST: Overlay Routing Using Selfish Neighbor Selection

    A foundational issue underlying many overlay network applications, ranging from routing to P2P file sharing, is connectivity management: folding new arrivals into an existing overlay and re-wiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are analytically tractable, especially via game-theoretic analysis. In this paper, we unify these two thrusts by using insights gleaned from novel, realistic theoretical models in the design of Egoist, a prototype overlay routing system that we implemented, deployed, and evaluated on PlanetLab. Using measurements on PlanetLab and trace-based simulations, we demonstrate that Egoist's neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, we demonstrate that Egoist is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overhead. Finally, we discuss some of the potential benefits Egoist may offer to applications.
    Funding: National Science Foundation (CISE/CSR 0720604, ENG/EFRI 0735974, CISE/CNS 0524477, CNS/NeTS 0520166, CNS/ITR 0205294; CISE/EIA RI 0202067; CAREER 04446522); European Commission (RIDS-011923)
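    The game-theoretic core of selfish neighbor selection is a best-response step: a node keeps the neighbor set that minimizes its own routing cost given the current overlay. The sketch below shows a brute-force version of that step; the data structures, cost metric, and exhaustive search are illustrative assumptions rather than Egoist's deployed heuristics.

```python
# Best-response neighbor selection: evaluate candidate neighbor sets and keep
# the one that minimizes this node's average routing delay over the current
# overlay. Brute force over small k; cost metric and search are illustrative.
import itertools

def best_response(me, nodes, direct_delay, overlay_dist, k):
    """Pick the k neighbors minimizing my average delay to every other node.

    direct_delay[(me, v)]: measured delay of a potential direct link me -> v.
    overlay_dist[(v, d)]:  current overlay routing delay from v to destination d.
    My delay to d via neighbor v is the sum of those two terms.
    """
    others = [n for n in nodes if n != me]

    def cost(neighbors):
        total = 0.0
        for dest in others:
            total += min(direct_delay[(me, v)] + overlay_dist[(v, dest)]
                         for v in neighbors)
        return total / len(others)

    return min(itertools.combinations(others, k), key=cost)


if __name__ == "__main__":
    nodes = ["me", "a", "b", "c"]
    direct = {("me", "a"): 10, ("me", "b"): 30, ("me", "c"): 25}
    dist = {("a", "a"): 0, ("a", "b"): 40, ("a", "c"): 45,
            ("b", "a"): 40, ("b", "b"): 0, ("b", "c"): 15,
            ("c", "a"): 45, ("c", "b"): 15, ("c", "c"): 0}
    print(best_response("me", nodes, direct, dist, k=2))  # -> ('a', 'c')
```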

    Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond

    In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones.
    Comment: 44 pages. 1 of USQCD whitepapers

    AirSync: Enabling Distributed Multiuser MIMO with Full Spatial Multiplexing

    The enormous success of advanced wireless devices is pushing the demand for higher wireless data rates. Denser spectrum reuse through the deployment of more access points per square mile has the potential to successfully meet the increasing demand for more bandwidth. In theory, the best approach to increasing density is distributed multiuser MIMO, where several access points are connected to a central server and operate as one large distributed multi-antenna access point, ensuring that all transmitted signal power serves the purpose of data transmission rather than creating "interference." In practice, while enterprise networks offer a natural setup in which distributed MIMO might be possible, there are serious implementation difficulties, the primary one being the need to eliminate phase and timing offsets between the jointly coordinated access points. In this paper we propose AirSync, a novel scheme which provides not only time but also phase synchronization, thus enabling distributed MIMO with full spatial multiplexing gains. AirSync locks the phase of all access points using a common reference broadcast over the air, in conjunction with a Kalman filter which closely tracks the phase drift. We have implemented AirSync as a digital circuit in the FPGA of the WARP radio platform. Our experimental testbed, comprising two access points and two clients, shows that AirSync achieves phase synchronization within a few degrees and allows the system to nearly reach the theoretically optimal multiplexing gain. We also discuss MAC and higher-layer aspects of a practical deployment. To the best of our knowledge, AirSync offers the first realization of the full multiuser MIMO gain, namely the ability to increase the number of wireless clients linearly with the number of jointly coordinated access points without reducing the per-client rate.
    Comment: Submitted to Transactions on Networking
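    The phase-tracking component can be illustrated with a small two-state Kalman filter that estimates phase and drift rate from noisy observations of the common reference. The sketch below is a generic, hedged version of that idea; the noise parameters and sample values are assumptions and do not reflect AirSync's actual FPGA implementation.

```python
# Two-state Kalman filter tracking (phase, drift rate) from noisy phase
# measurements of a common reference. Noise parameters and sample values are
# illustrative; they do not reflect AirSync's actual FPGA implementation.
import numpy as np

def track_phase(measurements, dt, q=1e-4, r=1e-2):
    """Return filtered (phase, drift) estimates for each noisy phase sample."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # phase advances by drift * dt
    H = np.array([[1.0, 0.0]])                   # only the phase is observed
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x = np.zeros((2, 1))                         # initial phase and drift guesses
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                                # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x              # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ y                            # update
        P = (np.eye(2) - K @ H) @ P
        estimates.append((float(x[0, 0]), float(x[1, 0])))
    return estimates


if __name__ == "__main__":
    dt, drift = 0.001, 2.0                       # 1 kHz samples, 2 rad/s true drift
    rng = np.random.default_rng(0)
    true_phase = drift * dt * np.arange(200)
    noisy = true_phase + 0.05 * rng.standard_normal(200)
    print(track_phase(noisy, dt)[-1])            # final (phase, drift) estimate
```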