    Protocols for Large Data Transfers over Local Networks

    Protocols for transmitting large amounts of data over a local area network are analyzed. These protocols differ from most other large-scale transfer protocols in three ways: the protocol definition requires the recipient to have sufficient buffers available to receive the data before the transfer takes place; the source and destination machines are assumed to be roughly matched in speed; and the protocol is implemented at the network interrupt level and is therefore not slowed down by process-scheduling delays. The results are based on measurements collected on Sun workstations connected to a 10 Mb/s Ethernet network using 3Com interfaces. The derivation of the elapsed time in terms of the network packet error rate assumes statistically independent errors.
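
    As a rough numeric illustration of how elapsed time depends on the packet error rate under the independent-error assumption, the sketch below counts each packet as needing 1/(1 - p) transmissions on average; the packet size, link rate and per-packet overhead are assumed values for illustration, not figures from the paper.

        # Sketch: expected elapsed time of a bulk transfer under independent packet
        # errors. Packet size, link rate and overhead are illustrative assumptions.

        def expected_transfer_time(total_bytes, packet_bytes=1024,
                                   link_bps=10_000_000, per_packet_overhead_s=0.001,
                                   error_rate=0.001):
            """Expected elapsed seconds when each packet takes 1/(1 - p) attempts."""
            packets = -(-total_bytes // packet_bytes)            # ceiling division
            time_per_attempt = packet_bytes * 8 / link_bps + per_packet_overhead_s
            expected_attempts = 1.0 / (1.0 - error_rate)         # geometric retries
            return packets * time_per_attempt * expected_attempts

        if __name__ == "__main__":
            for p in (0.0, 0.001, 0.01, 0.05):
                t = expected_transfer_time(1_000_000, error_rate=p)
                print(f"error rate {p:.3f}: {t:.2f} s for 1 MB")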

    Data-Efficient Decentralized Visual SLAM

    Decentralized visual simultaneous localization and mapping (SLAM) is a powerful tool for multi-robot applications in environments where absolute positioning systems are not available. Being visual, it relies on cameras, which are cheap, lightweight, and versatile sensors; being decentralized, it does not rely on communication to a central ground station. In this work, we integrate state-of-the-art decentralized SLAM components into a new, complete decentralized visual SLAM system. To allow for data association and co-optimization, existing decentralized visual SLAM systems regularly exchange the full map data between all robots, incurring large data transfers at a complexity that scales quadratically with the robot count. In contrast, our method performs efficient data association in two stages: in the first stage, a compact full-image descriptor is deterministically sent to only one robot; in the second stage, which is executed only if the first stage succeeded, the data required for relative pose estimation is sent, again to only one robot. Thus, data association scales linearly with the robot count and uses highly compact place representations. For optimization, a state-of-the-art decentralized pose-graph optimization method is used; it exchanges a minimal amount of data that is linear in the trajectory overlap. We characterize the resulting system and identify bottlenecks in its components. The system is evaluated on publicly available data, and we provide open access to the code. Comment: 8 pages, submitted to ICRA 201
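
    To make the two-stage exchange concrete, here is a minimal sketch under assumed details: a toy 8-dimensional descriptor, a coarse bucketing of descriptor space to pick the single recipient robot, and an arbitrary matching threshold. The actual system's descriptor, assignment rule and matching criteria differ; this only illustrates why stage-1 traffic grows linearly with the robot count.

        # Toy two-stage data association: stage 1 sends only a compact descriptor
        # to one deterministically chosen robot; stage 2 runs only on a hit.
        import numpy as np

        NUM_ROBOTS = 4
        MATCH_THRESHOLD = 0.2     # assumed descriptor-distance threshold

        def responsible_robot(descriptor: np.ndarray) -> int:
            # Deterministic partition of descriptor space: each descriptor is routed
            # to exactly one peer. A real system needs a noise-robust partition
            # (e.g. clustering); this coarse bucketing is just a toy stand-in.
            return int(descriptor[0] * NUM_ROBOTS) % NUM_ROBOTS

        def stage_one(descriptor, databases):
            """Send only the compact descriptor, and only to the responsible robot."""
            peer = responsible_robot(descriptor)
            best_id, best_dist = None, float("inf")
            for frame_id, stored in databases[peer]:
                dist = float(np.linalg.norm(descriptor - stored))
                if dist < best_dist:
                    best_id, best_dist = frame_id, dist
            return (peer, best_id) if best_dist < MATCH_THRESHOLD else None

        def stage_two(peer, frame_id, keypoints):
            # Only after a stage-1 hit is the heavier relative-pose data sent,
            # again to a single robot.
            print(f"sending {len(keypoints)} keypoints to robot {peer} "
                  f"for relative pose estimation against frame {frame_id}")

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            # Stored descriptors are indexed at the robot responsible for them, so a
            # later query for the same place is routed to that same single robot.
            databases = {r: [] for r in range(NUM_ROBOTS)}
            stored = [rng.random(8) for _ in range(20)]
            for i, desc in enumerate(stored):
                databases[responsible_robot(desc)].append((i, desc))
            query = stored[7].copy()          # revisiting the place seen in frame 7
            hit = stage_one(query, databases)
            if hit is not None:
                stage_two(hit[0], hit[1], keypoints=list(range(200)))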

    Performance analysis of downlink shared channels in a UMTS network

    In light of the expected growth in wireless data communications and the commonly anticipated up/downlink asymmetry, we present a performance analysis of downlink data transfer over Downlink Shared Channels (DSCHs), arguably the most efficient UMTS transport channel for medium-to-large data transfers. Our objective is to provide qualitative insight into the different aspects that influence the data Quality of Service (QoS). As the most important factor, the data traffic load affects the data QoS in two distinct ways: (i) a heavier data traffic load implies greater competition for DSCH resources and thus longer transfer delays; and (ii) since each data call served on a DSCH must maintain an Associated Dedicated Channel (A-DCH) for signalling purposes, a heavier data traffic load implies a higher interference level, a higher frame error rate, and thus a lower effective aggregate DSCH throughput: the greater the demand for service, the smaller the aggregate service capacity. The latter effect is further amplified in a multicellular scenario, where a DSCH experiences additional interference from the DSCHs and A-DCHs in surrounding cells, causing a further degradation of its effective throughput. Following an insightful two-stage performance evaluation approach, which segregates the interference aspects from the traffic dynamics, a set of numerical experiments is executed in order to demonstrate these effects and obtain qualitative insight into the impact of various system aspects on the data QoS.
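
    A minimal numeric sketch of the two effects, with assumed parameter values rather than the paper's model: a heavier load both splits the shared channel between more users and, via the A-DCH interference, raises the frame error rate and shrinks the aggregate capacity itself.

        # Toy illustration of the two load effects on a shared downlink channel.
        # Peak rate and per-call frame-error increment are illustrative assumptions.

        PEAK_RATE_KBPS = 384.0          # assumed DSCH peak rate
        FER_PER_CALL = 0.01             # assumed frame-error increment per A-DCH

        def effective_aggregate_rate(active_calls):
            # Effect (ii): more active calls -> more interference -> higher frame
            # error rate -> lower effective aggregate throughput.
            fer = min(0.5, FER_PER_CALL * active_calls)
            return PEAK_RATE_KBPS * (1.0 - fer)

        def mean_transfer_time(file_kbit, active_calls):
            # Effect (i): a processor-sharing view of the DSCH, where every active
            # call gets an equal share of whatever capacity survives the interference.
            per_call_rate = effective_aggregate_rate(active_calls) / active_calls
            return file_kbit / per_call_rate

        if __name__ == "__main__":
            for n in (1, 5, 10, 20):
                print(f"{n:2d} calls: aggregate {effective_aggregate_rate(n):6.1f} kbit/s, "
                      f"mean time for a 500 kbit transfer {mean_transfer_time(500, n):6.2f} s")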

    An Assessment of Data Transfer Performance for Large-Scale Climate Data Analysis and Recommendations for the Data Infrastructure for CMIP6

    We document the data transfer workflow, data transfer performance, and other aspects of staging approximately 56 terabytes of climate model output data from the distributed Coupled Model Intercomparison Project (CMIP5) archive to the National Energy Research Scientific Computing Center (NERSC) at the Lawrence Berkeley National Laboratory, as required for tracking and characterizing extratropical storms, phenomena of importance in the mid-latitudes. We present this analysis to illustrate the current challenges in assembling multi-model data sets at major computing facilities for large-scale studies of CMIP5 data. Because of the larger archive size of the upcoming CMIP6 phase of model intercomparison, we expect such data transfers to become of increasing importance, and perhaps of routine necessity. We find that data transfer rates using the ESGF are often slower than what is typically available to US residences and that there is significant room for improvement in the data transfer capabilities of the ESGF portal and data centers, both in terms of workflow mechanics and in data transfer performance. We believe performance improvements of at least an order of magnitude are within technical reach using current best practices, as illustrated by the performance we achieved in transferring the complete raw data set between two high performance computing facilities. To achieve these performance improvements, we recommend: that current best practices (such as the Science DMZ model) be applied to the data servers and networks at ESGF data centers; that sufficient financial and human resources be devoted at the ESGF data centers to the systems and network engineering tasks needed to support high performance data movement; and that performance metrics for data transfer between ESGF data centers and major computing facilities used for climate data analysis be established, regularly tested, and published.
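
    As a back-of-the-envelope illustration of why an order-of-magnitude rate improvement matters for a dataset of this size, the sketch below converts sustained rates into wall-clock days for roughly 56 TB. The specific rates are assumed for illustration ("residential broadband" vs. a well-tuned Science DMZ path) and are not figures from the paper.

        # Transfer-time arithmetic for a ~56 TB staging job at assumed sustained rates.

        DATASET_TB = 56

        def days_to_transfer(dataset_tb, rate_mbps):
            bits = dataset_tb * 1e12 * 8          # decimal terabytes -> bits
            seconds = bits / (rate_mbps * 1e6)
            return seconds / 86400

        if __name__ == "__main__":
            for label, rate in [("25 Mb/s", 25), ("250 Mb/s", 250), ("2.5 Gb/s", 2500)]:
                print(f"{label:>9}: {days_to_transfer(DATASET_TB, rate):6.1f} days")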

    Convolutional LSTM models to estimate network traffic

    Network utilisation efficiency can, at least in principle, often be improved by dynamically re-configuring routing policies to better distribute on-going large data transfers. Unfortunately, the information necessary to decide on an appropriate reconfiguration - details of on-going and upcoming data transfers such as their source and destination and, most importantly, their volume and duration - is usually lacking. Fortunately, the increased use of scheduled transfer services, such as FTS, makes it possible to collect the necessary information. However, the mere detection and characterisation of larger transfers is not sufficient to predict with confidence the likelihood that a network link will become overloaded. In this paper we present the use of LSTM-based models (CNN-LSTM and Conv-LSTM) to effectively estimate future network traffic and so provide a solid basis for formulating a sensible network configuration plan. Comment: vCHEP 2021 conference proceeding
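
    As an illustrative sketch of a ConvLSTM-style traffic estimator, the snippet below predicts the next traffic matrix from a window of past matrices. The window length, site count, layer sizes and synthetic data are all assumptions for illustration, not the configuration used in the paper.

        # Minimal ConvLSTM sketch: predict the next (source x destination) traffic
        # matrix from a window of past matrices. All sizes are illustrative.
        import numpy as np
        import tensorflow as tf

        WINDOW = 12          # past time steps fed to the model
        SITES = 8            # traffic matrix is SITES x SITES

        def build_model():
            return tf.keras.Sequential([
                tf.keras.layers.Input(shape=(WINDOW, SITES, SITES, 1)),
                tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                                           return_sequences=False),
                tf.keras.layers.Conv2D(1, kernel_size=1, activation="relu"),
            ])

        if __name__ == "__main__":
            # Synthetic "traffic matrices" stand in for binned FTS transfer volumes.
            rng = np.random.default_rng(0)
            x = rng.random((64, WINDOW, SITES, SITES, 1)).astype("float32")
            y = x[:, -1]                      # toy target: the last observed matrix
            model = build_model()
            model.compile(optimizer="adam", loss="mse")
            model.fit(x, y, epochs=1, batch_size=8, verbose=0)
            print("predicted next traffic matrix shape:", model.predict(x[:1]).shape)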

    PerfSONAR and OFFN: Diagnosing and Testing Science DMZ Connections for the High-Speed Transfer of Large Datasets on OneOklahoma Friction Free Network

    The OneOklahoma Friction Free Network (OFFN) is a Science DMZ network designed to enable collaboration between high performance computing centers and higher learning institutions across the state of Oklahoma. OFFN allows researchers to transfer datasets at speeds faster than a standard internet connection by providing direct connections to servers on OFFN that are not impeded by firewalls. We use PerfSONAR to measure the performance of OFFN; a PerfSONAR testing server is needed on each OFFN node to monitor the health of the network as a whole. PerfSONAR can test responsiveness, data throughput, packet loss, and one-way latency to determine whether there is an issue on the network that could cause problems for large data transfers across OFFN and for the researchers who rely on it.
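
    A minimal sketch of the kind of health check such per-node measurements enable: flag any path whose measured throughput, packet loss or one-way latency would make a large dataset transfer problematic. The metric names, thresholds and sample values are assumptions for illustration; a real deployment would pull the measurements from the PerfSONAR archives on the OFFN nodes.

        # Flag paths whose measured metrics fall outside assumed targets for
        # large data transfers. Thresholds and sample values are illustrative.

        THRESHOLDS = {
            "throughput_gbps": 5.0,   # minimum acceptable sustained throughput
            "loss_pct": 0.1,          # maximum acceptable packet loss
            "owd_ms": 20.0,           # maximum acceptable one-way delay
        }

        def check_path(name, throughput_gbps, loss_pct, owd_ms):
            problems = []
            if throughput_gbps < THRESHOLDS["throughput_gbps"]:
                problems.append(f"throughput {throughput_gbps} Gb/s below target")
            if loss_pct > THRESHOLDS["loss_pct"]:
                problems.append(f"packet loss {loss_pct}% above target")
            if owd_ms > THRESHOLDS["owd_ms"]:
                problems.append(f"one-way delay {owd_ms} ms above target")
            print(f"{name}: " + ("OK" if not problems else "; ".join(problems)))

        if __name__ == "__main__":
            check_path("campus-A -> HPC-center", 9.2, 0.01, 4.0)
            check_path("campus-B -> HPC-center", 2.1, 0.40, 35.0)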