10 research outputs found

    Delphi: A Software Controller for Mobile Network Selection

    This paper presents Delphi, a mobile software controller that helps applications select the best network among available choices for their data transfers. Delphi optimizes a specified objective, such as transfer completion time, energy per byte transferred, or the monetary cost of a transfer. It has four components: a network monitor that gathers features of the available networks, a performance predictor that uses those features, a traffic profiler that estimates transfer sizes near the start of a transfer, and a network selector that uses the prediction and the transfer-size estimate to optimize the objective. For each transfer, Delphi either recommends the best single network to use or recommends Multi-Path TCP (MPTCP), crucially selecting the network for MPTCP's primary subflow. The choice of primary subflow has a strong impact on the transfer completion time, especially for short transfers. We designed and implemented Delphi in Linux; it requires no application modifications. Our evaluation shows that Delphi reduces application network transfer time by 46% for Web browsing and by 49% for video streaming, compared with Android's default policy of always using Wi-Fi when it is available. Delphi can also be configured to achieve high throughput while being battery-efficient: in this configuration, it achieves 1.9x the throughput of Android's default policy while consuming only 6% more energy.
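    As a rough illustration of the selection logic the abstract describes, the sketch below picks the network (or the MPTCP primary subflow) that minimizes a chosen objective, given per-network performance predictions and an estimated transfer size. The names, thresholds, and prediction format are hypothetical assumptions for this example, not Delphi's actual interface.

```python
# Hypothetical sketch; not Delphi's actual interface or policy.

def select_network(predictions, transfer_size, objective="completion_time"):
    """Pick the network (or MPTCP primary subflow) that minimizes the objective.

    predictions: dict mapping network name -> dict with predicted
                 'throughput_bps', 'energy_per_byte_j', 'cost_per_byte'.
    transfer_size: estimated transfer size in bytes (from a traffic profiler).
    """
    def score(net):
        p = predictions[net]
        if objective == "completion_time":
            return transfer_size / p["throughput_bps"]
        if objective == "energy_per_byte":
            return p["energy_per_byte_j"]
        if objective == "monetary_cost":
            return transfer_size * p["cost_per_byte"]
        raise ValueError(f"unknown objective: {objective}")

    ranked = sorted(predictions, key=score)
    best, runner_up = ranked[0], ranked[1] if len(ranked) > 1 else None

    # For larger transfers where two networks score similarly, prefer MPTCP and
    # make the best network the primary subflow (the abstract notes the primary
    # subflow choice dominates completion time for short transfers).
    use_mptcp = (
        runner_up is not None
        and transfer_size > 1_000_000                # assumed size threshold
        and score(runner_up) < 1.5 * score(best)     # assumed similarity margin
    )
    return {"primary": best, "mptcp": use_mptcp}


if __name__ == "__main__":
    preds = {
        "wifi": {"throughput_bps": 8e6, "energy_per_byte_j": 2e-7, "cost_per_byte": 0.0},
        "lte":  {"throughput_bps": 20e6, "energy_per_byte_j": 5e-7, "cost_per_byte": 1e-8},
    }
    print(select_network(preds, transfer_size=5_000_000))
```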

    Evaluation of Adaptive Video Playback Algorithms in Mobile Cloud Gaming

    Mobile cloud gaming has recently gained popularity as a result of improvements in the quality of Internet connections and mobile networks. Under stable conditions, current LTE networks can provide a suitable platform for the demanding requirements of mobile cloud gaming. However, since the quality of mobile network connections constantly changes, the network may be unable to always provide the best possible service to all clients. Thus, the ability to adapt is necessary for a mobile cloud gaming platform in order to compensate for changing bandwidth conditions in mobile networks. One approach is to change the quality of the video stream to match the available bandwidth of the network. This thesis evaluates an adaptive streaming method implemented on a mobile cloud gaming platform called GamingAnywhere and provides an alternative approach for estimating the available bandwidth by measuring the signal strength values of a mobile device. Experiments were conducted in a real LTE network to determine the best approach to reconfiguring the video-stream encoder to match the bandwidth of the network. The results show that increasing the constant-rate-factor parameter of the video encoder by 12 reduces the necessary bandwidth to about half. Changing this video encoder parameter thus provides an effective means to compensate for significant changes in bandwidth. However, high values of the constant-rate-factor parameter can considerably reduce the quality of the video stream, so the frame rate of the video should be lowered if the constant-rate-factor is already high.
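    The reported rule of thumb, that raising the constant-rate-factor (CRF) by 12 roughly halves the required bandwidth, implies a bitrate model of the form bitrate(crf) ≈ bitrate(crf0) × 0.5^((crf − crf0)/12). The sketch below applies that model to pick encoder settings, falling back to a lower frame rate when the CRF is already high. The base values, thresholds, and bandwidth source are illustrative assumptions, not the thesis' implementation.

```python
# Back-of-the-envelope sketch of the adaptation rule reported in the abstract.
# BASE_CRF, BASE_BITRATE, and MAX_USEFUL_CRF are assumed values for illustration.

BASE_CRF = 23          # assumed starting CRF
BASE_BITRATE = 8e6     # assumed bitrate at BASE_CRF, bits/s
MAX_USEFUL_CRF = 35    # assumed point past which quality degrades too much

def required_bitrate(crf, base_crf=BASE_CRF, base_bitrate=BASE_BITRATE):
    """Bitrate needed at a given CRF: halves for every +12 over the base CRF."""
    return base_bitrate * 0.5 ** ((crf - base_crf) / 12)

def pick_encoder_settings(available_bps, fps=60):
    """Raise CRF until the stream fits; if CRF is already high, drop the frame rate."""
    crf = BASE_CRF
    while required_bitrate(crf) > available_bps and crf < MAX_USEFUL_CRF:
        crf += 1
    if required_bitrate(crf) > available_bps:
        fps = 30  # raising CRF further would hurt quality too much
    return crf, fps

if __name__ == "__main__":
    for bw in (8e6, 4e6, 1e6):   # estimated available bandwidth, bits/s
        print(bw, pick_encoder_settings(bw))
```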

    Techniques for Improving TCP Performance in Wireless Networks

    Ph.D. dissertation, Department of Electrical and Computer Engineering, Seoul National University Graduate School, August 2015; advisor: 박세웅. TCP (Transmission Control Protocol), one of the most essential protocols of the Internet, has carried most of the Internet's traffic since its birth. With the deployment of various types of wireless networks and the proliferation of smart devices, mobile data traffic has grown rapidly, and TCP still carries the majority of mobile traffic, renewing attention on TCP performance in wireless networks. In this dissertation, we tackle three problems that aim to improve TCP performance in wireless networks. First, we deal with the downstream bufferbloat problem in wireless access networks such as LTE and Wi-Fi. We clarify the downstream bufferbloat problem in resource-competitive environments such as Wi-Fi and design a receiver-side countermeasure that is easy to deploy because it requires no modification at the sender or intermediate routers. Exploiting TCP and AQM dynamics, our scheme competes for the shared resource fairly with conventional TCP flow control and prevents bufferbloat. We implement our scheme in commercial smart devices and verify its performance through real experiments in LTE and Wi-Fi networks. Second, we consider the upstream bufferbloat problem in LTE networks. We show that upstream bufferbloat can significantly degrade the QoE of multitasking users in LTE networks and design a packet scheduler that separates delay-sensitive packets from non-delay-sensitive packets without computational overhead. We implement the proposed packet scheduler in commercial smart devices and evaluate its performance through real experiments in LTE networks. Lastly, we investigate the TCP fairness problem in low-power and lossy networks (LLNs). We confirm severe throughput unfairness among nodes with different hop counts and propose a dynamic TX period adjustment scheme to enhance TCP fairness in LLNs. Through experiments on a testbed, we evaluate how much the proposed scheme improves the fairness index.
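    As a rough sketch of the receiver-side idea described above (a receiver-side window control driven by delay measurement and queue-length estimation), the code below caps the advertised receive window using the gap between the smoothed RTT and the base RTT, so the sender cannot inflate the bottleneck buffer. The constants and measurement hooks are assumptions for illustration; the dissertation's actual RTAC algorithm and configuration differ in their details.

```python
# Minimal sketch of a receiver-side bufferbloat countermeasure. TARGET_QUEUE_DELAY
# and MSS are assumed values; delivery rate and RTTs are taken as given inputs.

TARGET_QUEUE_DELAY = 0.05   # assumed queueing-delay budget, seconds
MSS = 1448                  # bytes

def advertised_window(delivery_rate_bps, srtt, base_rtt):
    """Receive window (bytes) that keeps the estimated queueing delay near the target."""
    queue_delay = max(srtt - base_rtt, 0.0)        # delay beyond the path minimum
    bdp = delivery_rate_bps / 8 * base_rtt         # bandwidth-delay product
    budget = delivery_rate_bps / 8 * TARGET_QUEUE_DELAY
    if queue_delay > TARGET_QUEUE_DELAY:
        window = bdp                               # queue building up: shrink to the BDP
    else:
        window = bdp + budget                      # headroom to stay competitive with other flows
    return max(int(window // MSS) * MSS, 2 * MSS)

if __name__ == "__main__":
    # Example: 40 Mbit/s link, 40 ms base RTT, 120 ms smoothed RTT (bloated queue).
    print(advertised_window(40e6, srtt=0.120, base_rtt=0.040))
```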

    4G/5G cellular networks metrology and management

    The proliferation of sophisticated applications and services comes with diverse performance requirements as well as exponential traffic growth on both the uplink and the downlink. Cellular networks such as 4G and 5G are expected to support this diverse and huge amount of data. This thesis targets advanced cellular-network supervision and management techniques, taking the traffic explosion and its diversity as two main challenges in these networks. The first contribution addresses the integration of intelligence into cellular networks through the estimation of users' instantaneous uplink throughput at small time granularities. A real-time 4G testbed is deployed for this purpose, providing an exhaustive benchmark of eNB metrics, and accurate estimates are obtained. The second contribution supports real-time 5G slicing of radio resources in a multi-cell system. Two exact optimization models are proposed; because of their high convergence times, heuristics are developed and evaluated against the optimal models. The results are promising, with both heuristics strongly supporting real-time RAN slicing.
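    To make the slicing contribution concrete, the following is a purely illustrative greedy heuristic that splits a cell's physical resource blocks (PRBs) among slices with guaranteed minima, in the general spirit of heuristics for real-time RAN slicing. The slice names, demands, and budgets are invented for the example and do not reproduce the thesis' models or heuristics.

```python
# Illustrative greedy PRB-splitting heuristic; all inputs below are assumed.

def allocate_prbs(cell_prbs, slices):
    """Greedily split a cell's PRBs among slices.

    slices: list of dicts with 'name', 'min_prbs' (guaranteed share) and
            'demand' (PRBs the slice could currently use).
    """
    # Start from each slice's guaranteed minimum (capped by its demand).
    alloc = {s["name"]: min(s["min_prbs"], s["demand"]) for s in slices}
    remaining = cell_prbs - sum(alloc.values())

    # Hand out leftover PRBs to the slices with the largest unmet demand first.
    for s in sorted(slices, key=lambda s: s["demand"] - alloc[s["name"]], reverse=True):
        if remaining <= 0:
            break
        extra = min(s["demand"] - alloc[s["name"]], remaining)
        alloc[s["name"]] += extra
        remaining -= extra
    return alloc

if __name__ == "__main__":
    slices = [
        {"name": "eMBB",  "min_prbs": 20, "demand": 80},
        {"name": "URLLC", "min_prbs": 10, "demand": 15},
        {"name": "mMTC",  "min_prbs": 5,  "demand": 30},
    ]
    print(allocate_prbs(cell_prbs=100, slices=slices))
```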

    Reducing Energy Consumption of a Modem via Selective Packet Transmission Delaying

    Ph.D. dissertation, Department of Electrical and Computer Engineering, Seoul National University Graduate School, February 2017; advisor: 홍성수. Continuous efforts have been made to reduce the power consumed by the modem in everyday mobile devices such as smartphones. Most of this work is based on packet transmission delaying: when a deferrable packet is submitted for transmission, its transmission is postponed and later performed in a single batch together with subsequently requested packets. This exploits the fact that a modem consumes far less energy when it transmits the same packets in one short burst than when it transmits them sporadically over a long period. Existing studies propose different ways to apply packet transmission delaying effectively under various operating conditions of a mobile device. However, in the complex operating environment of modern mobile devices such as smartphones, delaying packets without carefully considering the modem's radio resource control (RRC) state can instead cause unexpected additional power consumption. This problem greatly undermines the effectiveness of existing modem power-saving techniques and makes them hard to adopt in practice. To address it, this dissertation proposes selective packet transmission delaying: a packet is delayed only when doing so is expected to reduce the modem's power consumption. At the core of the technique is a model that estimates the energy gain of delaying a packet transmission; given a transmission request, the model uses the modem's current RRC state and the predicted time of the next transmission request to compute how the modem's power consumption would change if the packet were delayed. The technique consists of three key components. The first is the Deferrable Packet Identifier, which decides whether a packet may be delayed based on whether the user would notice the delay. The second is the Pattern-based Next Packet Predictor, which operates in an offline training phase and an online phase: during training it monitors the packet transmission requests of the applications running on the device and derives per-application request patterns, and online it uses these patterns to predict when the next transmission request will occur. The third is the Packet Transmission Time Designator, which combines the outputs of the other two components with the energy-gain estimation model to choose the packet's actual transmission time so that modem power consumption is reduced. To validate the approach, we implemented the three components on a commercial Google Nexus 5 smartphone connected to KT's 4G LTE network and experimentally evaluated the modem power savings. The results show that the proposed technique reduces modem power consumption by up to 22.5%, demonstrating that the energy-gain estimation model and the selective packet transmission delaying built on it are a practical and effective means of reducing the power consumption of the modems in modern mobile devices.
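    A toy version of the energy-gain estimate described above is sketched below: given the modem's RRC state and the predicted time of the next transmission request, it decides whether deferring and batching the packet is expected to save energy under a simple tail-timer model. All power values, timers, and the minimum-gain threshold are illustrative assumptions, not measurements or the model from the dissertation.

```python
# Toy RRC tail model for deciding whether to defer a packet; all constants assumed.

TAIL_SEC = 10.0       # assumed RRC inactivity (tail) timer, seconds
P_TAIL_W = 1.0        # assumed power while the tail timer runs, watts
E_PROMOTION_J = 2.0   # assumed energy of an IDLE -> CONNECTED promotion, joules
E_BURST_J = 0.8       # assumed energy of one short transmission burst, joules
MIN_GAIN_J = 3.0      # assumed threshold below which deferral is not worth it

def energy_if_sent_now(idle_now, next_request_in):
    """Modem energy until shortly after the next request if we transmit immediately."""
    promo = E_PROMOTION_J if idle_now else 0.0
    if next_request_in < TAIL_SEC:
        # The next request lands inside the tail: one continuous active period.
        return promo + 2 * E_BURST_J + P_TAIL_W * (next_request_in + TAIL_SEC)
    # Two separate active periods, each ending with a full tail.
    return promo + E_PROMOTION_J + 2 * (E_BURST_J + P_TAIL_W * TAIL_SEC)

def energy_if_deferred(idle_now, next_request_in):
    """Modem energy if the packet is deferred and batched with the next request."""
    promo = E_PROMOTION_J if idle_now or next_request_in >= TAIL_SEC else 0.0
    return promo + 1.5 * E_BURST_J + P_TAIL_W * TAIL_SEC

def should_defer(deferrable, idle_now, next_request_in):
    if not deferrable:
        return False
    gain = energy_if_sent_now(idle_now, next_request_in) - energy_if_deferred(idle_now, next_request_in)
    return gain > MIN_GAIN_J

if __name__ == "__main__":
    for gap in (2.0, 30.0):   # predicted seconds until the next transmission request
        print(f"next request in {gap:>4}s -> defer: {should_defer(True, True, gap)}")
```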

    Understanding and Improving the Performance of Web Page Loads

    The web is vital to our daily lives, yet web pages are often slow to load. The inefficiency and complexity of loading web pages can be attributed to the dependencies between resources within a web page, which also leads to underutilization of the CPU and network on client devices. My thesis research seeks solutions that enable better use of the client-side CPU and network during page loads. Such solutions can be categorized into three types of approaches: 1) leveraging a proxy to optimize web page loads, 2) modifying the end-to-end interaction between client browsers and web servers, and 3) rewriting web pages. Each approach offers various benefits and trade-offs. This dissertation explores three specific solutions. First, CASPR is a proxy-based solution that enables clients to offload JavaScript computations to proxies. CASPR loads web pages on behalf of clients and transforms every page into a version that is simpler for clients to process, leading to a 1.7s median improvement in web page rendering for popular CASPR web pages. Second, Vroom rethinks how page loads work; in order to minimize dependencies between resources, it enables web servers to provide resource hints to clients and ensures that resources are loaded with proper prioritization. As a result, Vroom halves the median load times for popular news and sports websites. Finally, I conducted a longitudinal study to understand how web pages have changed over time and how these changes have affected performance. Ph.D. dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163157/1/vaspol_1.pd
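    As a loose illustration of the resource-hint idea attributed to Vroom, the sketch below has a client consume a server-provided hint list and fetch resources in priority order, instead of discovering them incrementally while parsing HTML and executing JavaScript. The hint format, URLs, and fetch stub are assumptions for the example; Vroom's actual mechanism inside the browser and server is more involved.

```python
# Hypothetical hint-driven fetcher; HINTS and fetch() are stand-ins, not Vroom's API.

import heapq

# Hints the server could attach to the main HTML response: (priority, URL),
# with lower numbers fetched first.
HINTS = [
    (0, "https://example.com/app.css"),
    (0, "https://example.com/framework.js"),
    (1, "https://example.com/api/front-page.json"),
    (2, "https://example.com/hero.jpg"),
]

def fetch(url):
    # Stand-in for a real network fetch.
    print(f"fetching {url}")

def load_with_hints(hints):
    """Fetch hinted resources in priority order rather than on discovery."""
    queue = list(hints)
    heapq.heapify(queue)
    while queue:
        _, url = heapq.heappop(queue)
        fetch(url)

if __name__ == "__main__":
    load_with_hints(HINTS)
```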

    Detection and Alleviation of Last-Mile Wireless Link Bottlenecks

    Ph.D. dissertation (Doctor of Philosophy).

    Coordinating Cellular Background Transfers using LoadSense

    To minimize battery drain due to background communication in cellular-connected devices such as smartphones, the duration for which the cellular radio is kept active should be minimized. This, in turn, calls for scheduling the background communication so as to maximize throughput. Prior work has recognized that a key determinant of throughput is the wireless link quality. However, as we show here, another key factor is the load in the cell arising from the communication of other nodes. Unlike link quality, the only way, thus far, for a cellular client to obtain a measure of load has been to perform active probing, which defeats the goal of minimizing the active duration of the radio. In this paper, we address this dilemma by making the following contributions. First, we show experimentally that to obtain good throughput, considering link quality alone is insufficient, and that cellular load must also be factored in. Second, we present a novel technique called LoadSense that lets a cellular client obtain a measure of the cellular load, locally and passively, allowing it to determine the times when available throughput to the client is likely to be high. Finally, we present the Peek-n-Sneak protocol, which enables a cellular client to "peek" into the channel and "sneak" in with its background communication when conditions are suitable. When multiple clients in a cell perform Peek-n-Sneak, they coordinate their communications implicitly and in an entirely distributed manner, akin to CSMA in wireless LANs, helping improve throughput (and reduce energy drain) for all. Our experimental evaluation shows overall device energy savings of 20-60% even when Peek-n-Sneak is deployed incrementally.
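    A minimal sketch of the Peek-n-Sneak behavior described above, under stated assumptions: the client passively "peeks" at a local load estimate and "sneaks" its background transfer in only when the cell looks lightly loaded, otherwise backing off randomly, CSMA-style. The load-estimation hook and the thresholds are hypothetical stand-ins; LoadSense's actual passive measurement is not reproduced here.

```python
# Toy Peek-n-Sneak loop; estimate_cell_load() and the constants are assumptions.

import random
import time

LOAD_THRESHOLD = 0.6      # assumed fraction of cell resources in use
MAX_BACKOFF_SEC = 8.0

def estimate_cell_load():
    """Stand-in for a passive, local cellular-load measurement."""
    return random.random()

def peek_n_sneak(send_background_transfer, attempts=10):
    backoff = 1.0
    for _ in range(attempts):
        if estimate_cell_load() < LOAD_THRESHOLD:      # peek: cell looks lightly loaded
            send_background_transfer()                 # sneak the transfer in
            return True
        time.sleep(random.uniform(0, backoff))         # randomized backoff, as in CSMA
        backoff = min(backoff * 2, MAX_BACKOFF_SEC)
    return False

if __name__ == "__main__":
    peek_n_sneak(lambda: print("background sync sent during a low-load window"))
```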