50 research outputs found

    ATP: a Datacenter Approximate Transmission Protocol

    Many datacenter applications, such as machine learning and streaming systems, do not need the complete set of data to perform their computation. Current approximate applications in datacenters run on a reliable network layer such as TCP. To improve performance, they either let the sender select a subset of the data and transmit it to the receiver, or transmit all the data and let the receiver drop some of it. These approaches are network-oblivious and transmit more data than necessary, affecting both application runtime and network bandwidth usage. On the other hand, running approximate applications on a lossy network with UDP cannot guarantee the accuracy of the application's computation. We propose to run approximate applications on a lossy network and to allow packet loss in a controlled manner. Specifically, we designed a new network protocol for datacenter approximate applications called the Approximate Transmission Protocol, or ATP. ATP opportunistically exploits as much available network bandwidth as possible, while performing a loss-based rate control algorithm to avoid bandwidth waste and retransmission. It also ensures fair bandwidth sharing across flows and improves the performance of accurate applications by leaving more switch buffer space to accurate flows. We evaluated ATP with both simulation and a real implementation, using two macro-benchmarks and two real applications, Apache Kafka and Flink. Our evaluation results show that ATP reduces application runtime by 13.9% to 74.6% compared to a TCP-based solution that drops packets at the sender, and improves accuracy by up to 94.0% compared to UDP.
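    The loss-based control loop described above can be pictured with a small sketch: the sender probes for more bandwidth while the measured loss stays within a tolerated budget and backs off multiplicatively once it exceeds it. This is not the paper's algorithm, only a minimal illustration of loss-driven rate control; the target-loss, gain, and backoff values are assumptions made for the example.

```python
# Minimal sketch of a loss-driven rate controller in the spirit of ATP.
# The target_loss, gain, and backoff values are illustrative assumptions,
# not parameters taken from the paper.

class LossBasedRateController:
    def __init__(self, init_rate_mbps=100.0, target_loss=0.02,
                 additive_gain_mbps=5.0, backoff=0.85,
                 min_rate_mbps=1.0, max_rate_mbps=10_000.0):
        self.rate = init_rate_mbps
        self.target_loss = target_loss
        self.gain = additive_gain_mbps
        self.backoff = backoff
        self.min_rate = min_rate_mbps
        self.max_rate = max_rate_mbps

    def on_feedback(self, packets_sent, packets_lost):
        """Update the sending rate from per-interval loss feedback."""
        loss = packets_lost / max(packets_sent, 1)
        if loss <= self.target_loss:
            # Below the tolerated loss budget: probe for more bandwidth.
            self.rate = min(self.rate + self.gain, self.max_rate)
        else:
            # Loss exceeds the budget: back off multiplicatively to avoid
            # wasting bandwidth on packets the receiver will not use.
            self.rate = max(self.rate * self.backoff, self.min_rate)
        return self.rate


ctrl = LossBasedRateController()
print(ctrl.on_feedback(packets_sent=1000, packets_lost=5))    # probes upward
print(ctrl.on_feedback(packets_sent=1000, packets_lost=120))  # backs off
```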

    Evaluation of Shoulder Complex Kinematics Pre and Post Rotator Cuff Repair and Effects of Altering Rotation Sequence

    Approximately 25% of adults in the United States will sustain a rotator cuff tear in their lifetime. Rotator cuff tears can impede one's ability to perform activities of daily living and maintain functional independence. Nearly 300,000 rotator cuff repair surgeries are performed annually, aiming to decrease pain, increase range of motion, and enable return to the workforce. The goal of postoperative rehabilitation is to allow healing of the repaired cuff while restoring function and range of motion, but it is demanding and may require a year. However, most rehabilitation protocols are based solely on clinical experience rather than on evidence-based rationale, and the optimal protocol is still being deliberated. Despite rapid growth in the literature, retear rates are still reported between 20% and 95%, and the knowledge required to improve rehabilitative planning is “seriously deficient”. A key issue is the variability in data reporting, including the presentation of post-operative data without the corresponding pre-operative data. Additionally, better knowledge of scapular and humeral kinematics could improve treatment options. A further concern is the methodology used for modeling the three-dimensional angles of the glenohumeral and thoracohumeral joints. The International Society of Biomechanics recommended standardized use of the YXY Euler rotation sequence. However, a recent review found that fewer than half of the studies examined adhered to these guidelines. Research has shown that no single sequence is accurate for all the motions of these joints, but no studies have explored the effects of all six Cardan sequences and the recommended YXY rotation on three-dimensional thoracohumeral joint angle calculations during upper extremity movements. This study performs a biomechanical evaluation of individuals with rotator cuff tears pre-operatively, 3 months post-operatively, and 6 months post-operatively to elucidate the changes occurring across the rehabilitation process and when joint motions normalize. This knowledge is critical for developing interventions that improve the post-operative recovery process, including more targeted exercises and shortened rehabilitation time. Additionally, the effects of the YXY and six Cardan rotation sequences on thoracohumeral joint angle computations will be investigated during a range of shoulder postures to provide recommendations on appropriate use.
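    The rotation-sequence question is easy to make concrete: the same orientation decomposed with different Euler/Cardan sequences yields different angle triples. The sketch below is a minimal example assuming SciPy's rotation utilities and an arbitrary humerus-relative-to-thorax orientation; it extracts angles with the ISB-recommended YXY sequence and the six Cardan sequences.

```python
# Minimal sketch of decomposing the same thoracohumeral rotation with the
# ISB-recommended YXY Euler sequence and the six Cardan sequences, using
# SciPy. The example rotation is arbitrary, chosen only to illustrate how
# the extracted angles differ between sequences.
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical humerus orientation relative to the thorax (degrees).
thoracohumeral = R.from_euler("YXY", [30.0, 45.0, 10.0], degrees=True)

sequences = ["YXY", "XYZ", "XZY", "YXZ", "YZX", "ZXY", "ZYX"]
for seq in sequences:
    angles = thoracohumeral.as_euler(seq, degrees=True)
    print(f"{seq}: {np.round(angles, 2)}")
```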

    On random wiring in practicable folded clos networks for modern datacenters

    Large scale, high performance and fault tolerance, low cost, and graceful expandability are sought-after features in current datacenter networks (DCNs). Although there have been many proposals for DCNs, most modern installations are equipped with classical folded Clos networks. Recently, regular random topologies such as Jellyfish have been proposed for DCNs. However, their completely unstructured nature entails serious design problems. In this paper we propose Random Folded Clos (RFC) and Hydra networks, in which the interconnection between certain switch levels is made randomly. Both RFCs and Hydras preserve important properties of Clos networks that provide straightforward deadlock-free multi-path routing. The proposed networks leverage randomness to be gracefully expandable, thereby allowing for fine-grained upgrades. RFCs and Hydras are compared in the paper, in topological and cost terms, against fat-trees, orthogonal fat-trees, and random regular networks. Experiments are also carried out to simulate their performance under synthetic traffic patterns emulating common loads present in warehouse-scale computers. These theoretical and empirical studies reveal the merits of these topologies and conclude that Hydra constitutes a practicable alternative to current datacenter networks, since it appropriately balances all the main design requirements. Moreover, Hydras perform better than fat-trees, their natural competitor, being able to connect the same number of computing nodes or more at significantly lower cost and latency while exhibiting comparable throughput.
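    The core construction, randomly wiring the uplinks of one switch level to the downlinks of the next while keeping port usage balanced, can be sketched in a few lines. The port and switch counts below are illustrative assumptions, not parameters from the paper.

```python
# Minimal sketch of randomly wiring the uplinks of one switch level to the
# downlinks of the next level, the basic idea behind RFC/Hydra-style
# topologies. Switch counts and port counts are illustrative assumptions.
import random

def random_level_wiring(lower_switches, upper_switches, uplinks_per_lower,
                        seed=0):
    """Return a list of (lower_switch, upper_switch) links.

    Every lower-level switch gets `uplinks_per_lower` uplinks; the upper
    endpoint of each uplink is chosen by randomly permuting a pool in which
    each upper switch appears an equal number of times, so port usage stays
    balanced while the wiring itself is random.
    """
    total_uplinks = lower_switches * uplinks_per_lower
    assert total_uplinks % upper_switches == 0, "ports must divide evenly"
    ports_per_upper = total_uplinks // upper_switches

    pool = [u for u in range(upper_switches) for _ in range(ports_per_upper)]
    random.Random(seed).shuffle(pool)

    links = []
    it = iter(pool)
    for lower in range(lower_switches):
        for _ in range(uplinks_per_lower):
            links.append((lower, next(it)))
    return links

print(random_level_wiring(lower_switches=4, upper_switches=2,
                          uplinks_per_lower=2))
```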

    Recent Advances in Machine Learning for Network Automation in the O-RAN

    The evolution of network technologies has witnessed a paradigm shift toward open and intelligent networks, with the Open Radio Access Network (O-RAN) architecture emerging as a promising solution. O-RAN introduces disaggregation and virtualization, enabling network operators to deploy multi-vendor and interoperable solutions. However, managing and automating the complex O-RAN ecosystem presents numerous challenges. To address this, machine learning (ML) techniques have gained considerable attention in recent years, offering promising avenues for network automation in O-RAN. This paper presents a comprehensive survey of current research efforts on network automation using ML in O-RAN. We begin by providing an overview of the O-RAN architecture and its key components, highlighting the need for automation. Subsequently, we delve into O-RAN support for ML techniques. The survey then explores challenges in network automation using ML within the O-RAN environment, followed by existing research studies discussing the application of ML algorithms and frameworks for network automation in O-RAN. The survey further discusses research opportunities by identifying important aspects where ML techniques can be of benefit.

    vrAIn: a deep learning approach tailoring computing and radio resources in virtualized RANs

    Proceedings of the 25th Annual International Conference on Mobile Computing and Networking (MobiCom'19), October 21-25, 2019, Los Cabos, Mexico. The virtualization of radio access networks (vRAN) is the last milestone in the NFV revolution. However, the complex dependencies between computing and radio resources make vRAN resource control particularly daunting. We present vrAIn, a dynamic resource controller for vRANs based on deep reinforcement learning. First, we use an autoencoder to project high-dimensional context data (traffic and signal quality patterns) into a latent representation. Then, we use a deep deterministic policy gradient (DDPG) algorithm based on an actor-critic neural network structure and a classifier to map (encoded) contexts into resource control decisions. We have implemented vrAIn using an open-source LTE stack over different platforms. Our results show that vrAIn successfully derives appropriate compute and radio control actions irrespective of the platform and context: (i) it provides savings in computational capacity of up to 30% over CPU-unaware methods; (ii) it improves the probability of meeting QoS targets by 25% over static allocation policies using similar CPU resources on average; (iii) upon CPU capacity shortage, it improves throughput performance by 25% over state-of-the-art schemes; and (iv) it performs close to optimal policies resulting from an offline oracle. To the best of our knowledge, this is the first work that thoroughly studies the computational behavior of vRANs, and the first approach to a model-free solution that does not need to assume any particular vRAN platform or system conditions. The work of University Carlos III of Madrid was supported by the H2020 5GMoNArch project (grant agreement no. 761445) and the H2020 5G-TOURS project (grant agreement no. 856950). The work of NEC Laboratories Europe was supported by the H2020 5GTRANSFORMER project (grant agreement no. 761536) and the 5GROWTH project (grant agreement no. 856709). The work of University of Cartagena was supported by Grant AEI/FEDER TEC2016-76465-C2-1-R (AIM) and Grant FPU14/03701.
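    The two-stage structure described above (a context encoder followed by an actor that emits resource decisions) can be sketched as below. This is only a structural illustration assuming PyTorch and arbitrary layer sizes; it omits the DDPG critic, the classifier, and the training loop described in the paper, and the name n_vnfs is a hypothetical placeholder for the number of controlled vRAN instances.

```python
# Minimal structural sketch of a vrAIn-style pipeline: an encoder compresses
# high-dimensional context (traffic / signal-quality samples) into a latent
# vector, and an actor network maps that latent vector to CPU and radio
# allocation decisions. Layer sizes and the toy input are illustrative
# assumptions, not the architecture from the paper.
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    def __init__(self, context_dim=256, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, context):
        return self.net(context)

class ResourceActor(nn.Module):
    def __init__(self, latent_dim=16, n_vnfs=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 2 * n_vnfs), nn.Sigmoid(),  # shares in [0, 1]
        )
        self.n_vnfs = n_vnfs

    def forward(self, latent):
        out = self.net(latent)
        # First half: CPU shares; second half: radio airtime shares.
        return out[..., :self.n_vnfs], out[..., self.n_vnfs:]

encoder, actor = ContextEncoder(), ResourceActor()
context = torch.randn(1, 256)              # one window of context samples
cpu, airtime = actor(encoder(context))     # per-instance allocation decisions
print(cpu.shape, airtime.shape)
```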

    EMPoWER Hybrid Networks: Exploiting Multiple Paths over Wireless and ElectRical Mediums

    Several technologies, such as WiFi, Ethernet and power-line communications (PLC), can be used to build residential and enterprise networks. These technologies often co-exist; most networks use WiFi, and buildings are readily equipped with electrical wires that can offer a capacity of up to 1 Gbps with PLC. Yet, current networks do not exploit this rich diversity and often operate far below the available capacity. We design, implement, and evaluate EMPoWER, a system that simultaneously exploits several potentially interfering mediums. It operates at layer 2.5, between the MAC and IP layers, and combines routing (to find multiple concurrent routes) and congestion control (to efficiently balance traffic across the routes). To optimize resource utilization and robustness, both components exploit the heterogeneous nature of the network. They are fair and efficient, and they operate only within the local area network, without affecting remote Internet hosts. We demonstrate the performance gains of EMPoWER by simulations and experiments on a 22-node testbed. We show that PLC/WiFi, benefiting from the diversity offered by wireless and electrical mediums, provides significant throughput gains (up to 10x) and improves coverage, compared to multi-channel WiFi.
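    The balancing goal of the congestion-control component can be illustrated with a toy allocator that splits a flow's demand across concurrent routes in proportion to each route's estimated available capacity. This is a simplification assumed for illustration, not EMPoWER's actual algorithm; the route names and capacities are made up.

```python
# Minimal sketch of the load-balancing idea behind EMPoWER: split a flow's
# traffic across several concurrent routes (e.g. a WiFi path and a PLC path)
# in proportion to each route's currently estimated available capacity.
# The capacities below are illustrative assumptions.

def split_traffic(demand_mbps, route_capacity_mbps):
    """Return per-route rates that never exceed a route's capacity."""
    total_capacity = sum(route_capacity_mbps.values())
    sendable = min(demand_mbps, total_capacity)
    return {
        route: sendable * capacity / total_capacity
        for route, capacity in route_capacity_mbps.items()
    }

allocation = split_traffic(
    demand_mbps=120,
    route_capacity_mbps={"wifi_ch1": 60, "wifi_ch6": 40, "plc": 100},
)
print(allocation)  # proportional shares summing to 120 Mbps
```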

    Resource management with adaptive capacity in C-RAN

    This work was supported in part by the Spanish ministry of science through the project RTI2018-099880-B-C32, with ERDF funds, and by the Grant FPI-UPC provided by the UPC. It was carried out under the COST CA15104 IRACON EU project. Efficient computational resource management in 5G Cloud Radio Access Network (C-RAN) environments is a challenging problem because it has to account simultaneously for throughput, latency, power efficiency, and optimization tradeoffs. This work proposes the use of a modified and improved version of the realistic Vienna Scenario, defined in COST action IC1004, to test C-RAN deployments at two different scales. First, a large-scale analysis with 628 Macro-cells (Mcells) and 221 Small-cells (Scells) is used to test different algorithms oriented to optimize the network deployment by minimizing delays, balancing the load among the Base Band Unit (BBU) pools, or clustering the Remote Radio Heads (RRHs) efficiently to maximize the multiplexing gain. After planning, real-time resource allocation strategies with Quality of Service (QoS) constraints should be optimized as well. To do so, a realistic small-scale scenario for the metropolitan area is defined by modeling the individual time-variant traffic patterns of 7000 users (UEs) connected to different services. The distribution of resources among UEs and BBUs is optimized by algorithms, based on a realistic calculation of the UEs' Signal to Interference and Noise Ratios (SINRs), that account for the required computational capacity per cell, the QoS constraints, and the service priorities. However, the assumption of a fixed computational capacity at the BBU pools may result in underutilized or oversubscribed resources, thus affecting the overall QoS. As resources are virtualized at the BBU pools, they could be dynamically instantiated according to the required computational capacity (RCC). For this reason, a new strategy for Dynamic Resource Management with Adaptive Computational capacity (DRM-AC) using machine learning (ML) techniques is proposed. Three ML algorithms have been tested to select the best predicting approach: support vector machine (SVM), time-delay neural network (TDNN), and long short-term memory (LSTM). DRM-AC reduces the average amount of unused resources by 96%, but there is still QoS degradation when the RCC is higher than the predicted computational capacity (PCC). For this reason, two new strategies are proposed and tested: DRM-AC with pre-filtering (DRM-AC-PF) and DRM-AC with error shifting (DRM-AC-ES), which reduce the average amount of unsatisfied resources by 99.9% and 98% compared to DRM-AC, respectively.
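    The DRM-AC idea, forecasting the required computational capacity and provisioning a margin on top of the prediction so that resources are rarely unsatisfied, can be sketched as below. The regressor choice (SVR), the synthetic traffic trace, the lag length, and the margin rule are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of the DRM-AC idea: forecast the required computational
# capacity (RCC) of a BBU pool from its recent history with an ML regressor
# (SVR here; the paper also evaluates TDNN and LSTM), then provision the
# predicted capacity plus a margin derived from recent under-prediction
# errors, in the spirit of error shifting.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic daily-periodic RCC trace (arbitrary capacity units).
t = np.arange(500)
rcc = 50 + 20 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 2, t.size)

LAGS = 8  # use the last 8 intervals as features
X = np.stack([rcc[i:i + LAGS] for i in range(len(rcc) - LAGS)])
y = rcc[LAGS:]

model = SVR(C=10.0, epsilon=0.5).fit(X[:400], y[:400])
pred = model.predict(X[400:])
true = y[400:]

# Error shifting (simplified): add a margin covering recent under-predictions
# so the provisioned capacity rarely falls below what is actually required.
under = np.maximum(true - pred, 0.0)
margin = np.percentile(under, 95)
provisioned = pred + margin

unsatisfied = np.mean(provisioned < true)
print(f"margin={margin:.2f}, intervals with unsatisfied RCC: {unsatisfied:.1%}")
```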