10 research outputs found

    Keep Your Nice Friends Close, but Your Rich Friends Closer -- Computation Offloading Using NFC

    Full text link
    The increasing complexity of smartphone applications and services necessitates high battery consumption, but the growth of smartphones' battery capacity is not keeping pace with these rising power demands. To overcome this problem, researchers established the research area of Mobile Cloud Computing (MCC). In this paper we advance on previous ideas by proposing and implementing the first known Near Field Communication (NFC)-based computation offloading framework. This research is motivated by the advantages of NFC's short-distance communication: its better security and its low battery consumption. We design a new NFC communication protocol that overcomes the limitations of the default protocol, removing the need for constant user interaction, the one-way communication restraint, and the limit on low data size transfer. We present experimental results on the energy consumption and execution time of two computationally intensive representative applications: (i) RSA key generation and encryption, and (ii) gaming/puzzles. We show that when the helper device is more powerful than the device offloading the computations, the execution time of the tasks is reduced. Finally, we show that devices that offload application parts considerably reduce their energy consumption, thanks to the low-power NFC interface and the benefits of offloading. Comment: 9 pages, 4 tables, 13 figures
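The trade-off the abstract describes (a slow but low-power link versus a faster helper CPU) can be sketched as a simple cost comparison. This is an illustrative model, not the paper's actual framework; the throughput figure assumes NFC's 424 kbit/s mode, and the cost parameters are hypothetical.

```python
# Illustrative sketch (not the paper's framework): a device decides whether
# to offload a task over NFC by comparing estimated local execution time
# against remote execution plus transfer over the low-bandwidth NFC link.

def should_offload(cycles, local_speed, helper_speed,
                   payload_bytes, nfc_bps=424_000 / 8):
    """Return True if offloading is estimated to finish sooner.

    cycles        -- CPU cycles the task needs (assumed known)
    local_speed   -- local CPU speed in cycles/second
    helper_speed  -- helper device CPU speed in cycles/second
    payload_bytes -- input plus result size transferred over NFC
    nfc_bps       -- NFC throughput in bytes/second (424 kbit/s mode)
    """
    t_local = cycles / local_speed
    t_remote = cycles / helper_speed + payload_bytes / nfc_bps
    return t_remote < t_local

# A powerful helper amortizes the slow NFC link for compute-heavy tasks
# with small payloads, such as RSA key generation:
offload = should_offload(cycles=5e9, local_speed=1e9,
                         helper_speed=4e9, payload_bytes=2_000)
```

The same comparison flips for short tasks, where the NFC transfer dominates and local execution wins.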

    Snapshot-Based Computation Offloading for Machine Learning Applications

    Get PDF
    ํ•™์œ„๋…ผ๋ฌธ (์„์‚ฌ)-- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› ๊ณต๊ณผ๋Œ€ํ•™ ์ „๊ธฐยท์ •๋ณด๊ณตํ•™๋ถ€, 2017. 8. ๋ฌธ์ˆ˜๋ฌต.๋จธ์‹ ๋Ÿฌ๋‹ ๊ธฐ์ˆ ์€ ๋ฐ์ดํ„ฐ๋ฅผ ํ•™์Šตํ•˜๊ณ  ๋ฌธ์ œ์˜ ๋‹ต์„ ์ถ”๋ก ํ•˜๊ธฐ ์œ„ํ•ด ๋ณต์žกํ•œ ์—ฐ์‚ฐ๊ณผ ๋ฐฉ๋Œ€ํ•œ ๋ฐ์ดํ„ฐ๋ฅผ ์š”๊ตฌํ•œ๋‹ค. ์ด๋Ÿฐ ๋จธ์‹ ๋Ÿฌ๋‹ ๊ธฐ์ˆ ์„ ์ €์‚ฌ์–‘ ์ž„๋ฒ ๋””๋“œ ๊ธฐ๊ธฐ์—์„œ ํ™œ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์˜คํ”„๋กœ๋”ฉ ๊ธฐ๋ฐ˜ ๋จธ์‹ ๋Ÿฌ๋‹์ด ์ œ์•ˆ๋˜์—ˆ๋‹ค. ์—ฐ์‚ฐ ์˜คํ”„๋กœ๋”ฉ์ด๋ž€ ์ž„๋ฒ ๋””๋“œ ๊ธฐ๊ธฐ์—์„œ ๋ณต์žกํ•œ ์—ฐ์‚ฐ์„ ๋™์ ์œผ๋กœ ์„œ๋ฒ„๋ฅผ ํ†ตํ•ด ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ์‹์ด๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ์›น ์–ดํ”Œ๋ฆฌ์ผ€์ด์…˜์„ ๋Œ€์ƒ์œผ๋กœ ์Šค๋ƒ…์ƒท ๊ธฐ๋ฐ˜ ์—ฐ์‚ฐ ์˜คํ”„๋กœ๋”ฉ์„ ์‚ฌ์šฉํ•˜์˜€๋‹ค. ์Šค๋ƒ…์ƒท์ด๋ž€ ์ˆ˜ํ–‰ ์ค‘์ธ ์›น ์–ดํ”Œ๋ฆฌ์ผ€์ด์…˜์˜ ์ƒํƒœ๋ฅผ ๋˜ ๋‹ค๋ฅธ ์›น ์–ดํ”Œ๋ฆฌ์ผ€์ด์…˜์˜ ํ˜•ํƒœ๋กœ ์ €์žฅํ•˜๊ณ  ๋ณต์›ํ•˜๋Š” ๊ธฐ์ˆ ์ด๋‹ค. ์Šค๋ƒ…์ƒท ๊ธฐ๋ฐ˜ ์—ฐ์‚ฐ ์˜คํ”„๋กœ๋”ฉ์„ ๋จธ์‹ ๋Ÿฌ๋‹ ์›น ์–ดํ”Œ๋ฆฌ์ผ€์ด์…˜์— ์ ์šฉ ์‹œ ๋‘ ๊ฐ€์ง€ ์ด์Šˆ๊ฐ€ ๋ฐœ์ƒํ•œ๋‹ค. ํ•˜๋‚˜๋Š” ์›น์—์„œ ์ด๋ฏธ์ง€๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ์บ”๋ฒ„์Šค ๊ฐ์ฒด์˜ ์ „์†ก ๋ฌธ์ œ์ด๋ฉฐ ๋‹ค๋ฅธ ํ•˜๋‚˜๋Š” ํฌ๊ธฐ๊ฐ€ ํฐ ๋จธ์‹ ๋Ÿฌ๋‹ ๋ชจ๋ธ ์ „์†ก ๋ฌธ์ œ์ด๋‹ค. ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ๋‘ ๊ฐ€์ง€ ์ด์Šˆ๋ฅผ ํ•ด๊ฒฐํ•˜์—ฌ ์Šค๋ƒ…์ƒท ๊ธฐ๋ฐ˜ ์˜คํ”„๋กœ๋”ฉ์„ ํ†ตํ•œ ๋จธ์‹  ๋Ÿฌ๋‹ ์›น ์–ดํ”Œ๋ฆฌ์ผ€์ด์…˜์˜ ์˜ฌ๋ฐ”๋ฅธ ๋™์ž‘๊ณผ ์„ฑ๋Šฅ ํ–ฅ์ƒ์„ ํ™•์ธํ•˜์˜€๋‹ค. ๋‘ ๊ฐ€์ง€ ์ด์Šˆ๋ฅผ ํ•ด๊ฒฐํ•˜์—ฌ ์‹ค์ œ ๋จธ์‹ ๋Ÿฌ๋‹ ์›น ์–ดํ”Œ๋ฆฌ์ผ€์ด์…˜์—์„œ ์ถ”๋ก  ์‹œ๊ฐ„์„ ์ธก์ •ํ•˜์˜€๋‹ค. ์ธก์ • ๊ฒฐ๊ณผ ํด๋ผ์ด์–ธํŠธ ์ˆ˜ํ–‰ ์‹œ๊ฐ„ ๋Œ€๋น„ ์˜คํ”„๋กœ๋”ฉ ์‹œ 3-3.5๋ฐฐ ์„ฑ๋Šฅ ํ–ฅ์ƒ์„ ํ™•์ธํ•˜์˜€๋‹ค.์ œ 1 ์žฅ ์„œ๋ก  1 ์ œ 2 ์žฅ ์Šค๋ƒ…์ƒท ๊ธฐ๋ฐ˜ ์—ฐ์‚ฐ ์˜คํ”„๋กœ๋”ฉ 3 ์ œ 1 ์ ˆ ์Šค๋ƒ…์ƒท 3 ์ œ 2 ์ ˆ ์Šค๋ƒ…์ƒท ๊ธฐ๋ฐ˜ ์—ฐ์‚ฐ ์˜คํ”„๋กœ๋”ฉ 3 ์ œ 3 ์žฅ ์บ”๋ฒ„์Šค ์ €์žฅ ๋ฐฉ๋ฒ• 8 ์ œ 1 ์ ˆ ImageData 9 ์ œ 2 ์ ˆ Rendering Context 11 ์ œ 4 ์žฅ ๋จธ์‹ ๋Ÿฌ๋‹ ๋ชจ๋ธ ์ „์†ก ๋ฐฉ๋ฒ• 12 ์ œ 5 ์žฅ ์‹คํ—˜ ๋ฐ ๊ฒฐ๊ณผ 14 ์ œ 1 ์ ˆ ์‹คํ—˜ ํ™˜๊ฒฝ 14 ์ œ 2 ์ ˆ ์บ”๋ฒ„์Šค ์ €์žฅ ๋ฐฉ๋ฒ•์— ๋”ฐ๋ฅธ ์ธก์ • ๊ฒฐ๊ณผ 14 ์ œ 3 ์ ˆ ๋ชจ๋ธ ์ „์†ก ๋ฐฉ๋ฒ•์— ๋”ฐ๋ฅธ ์ธก์ • ๊ฒฐ๊ณผ 16 ์ œ 6 ์žฅ ๊ฒฐ๋ก  18 ์ฐธ๊ณ ๋ฌธํ—Œ 19Maste

    An SOA-Based Framework of Computational Offloading for Mobile Cloud Computing

    Get PDF
    Mobile computing is a technology that allows transmission of audio, video, and other types of data via a computer or any other wireless-enabled device without a fixed physical link. Despite the increasing usage of mobile computing, exploiting its full potential is difficult due to inherent problems such as resource scarcity, connection instability, and limited computational power. In particular, connecting mobile devices to the internet offers the possibility of offloading computation- and data-intensive tasks from mobile devices to remote cloud servers for efficient execution. This thesis develops an algorithm that uses an objective function to adaptively decide computational offloading strategies according to changing context information. By following the style of Service-Oriented Architecture (SOA), the proposed framework brings cloud computing to mobile devices so that mobile applications can benefit from remote execution of tasks in the cloud. This research discusses the algorithm and framework, along with the results of experiments on a newly developed system for self-driving vehicles, and points out the anticipated advantages of adaptive computational offloading.
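An objective-function-based offloading decision of the kind the abstract describes might look as follows. The weights and context fields here are assumptions for illustration, not the thesis's actual formulation.

```python
# Illustrative adaptive offloading decision: a weighted objective function
# scores the current context; a positive score favors remote execution.

def offload_score(ctx, w_time=0.5, w_energy=0.3, w_net=0.2):
    """Higher score means offloading looks more beneficial right now."""
    return (w_time * (ctx["local_time"] - ctx["remote_time"])      # time saved
            + w_energy * (ctx["local_energy"] - ctx["remote_energy"])  # energy saved
            - w_net * ctx["network_latency"])                      # network penalty

# Context values change at runtime; the decision adapts with them.
context = {"local_time": 4.0, "remote_time": 1.0,
           "local_energy": 3.0, "remote_energy": 0.5,
           "network_latency": 0.2}
decision = "offload" if offload_score(context) > 0 else "run locally"
```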

    MobiCOP: A Scalable and Reliable Mobile Code Offloading Solution

    Get PDF

    A Survey of Performance Optimization for Mobile Applications

    Get PDF
    Nowadays there is a mobile application for almost everything a user may think of, ranging from paying bills and gathering information to playing games and watching movies. To ensure user satisfaction and the success of applications, it is important to provide highly performant applications. This is particularly important for resource-constrained systems such as mobile devices, where non-functional performance characteristics, such as energy and memory consumption, play an important role in user satisfaction. This paper provides a comprehensive survey of non-functional performance optimization for Android applications. We collected 155 unique publications, published between 2008 and 2020, that focus on the optimization of non-functional performance of mobile applications. We target our search at four performance characteristics in particular: responsiveness, launch time, memory consumption, and energy consumption. For each performance characteristic, we categorize optimization approaches based on the method used in the corresponding publications. Furthermore, we identify research gaps in the literature for future work.

    ์—ฃ์ง€ ํด๋ผ์šฐ๋“œ ํ™˜๊ฒฝ์„ ์œ„ํ•œ ์—ฐ์‚ฐ ์˜คํ”„๋กœ๋”ฉ ์‹œ์Šคํ…œ

    Get PDF
    Doctoral dissertation -- Seoul National University, Graduate School, College of Engineering, Department of Electrical and Computer Engineering, 2020. 2. Advisor: Soo-Mook Moon. The purpose of my dissertation is to build lightweight edge computing systems which provide seamless offloading services even when users move across multiple edge servers. I focused on two specific application domains: 1) web applications and 2) DNN applications. I propose an edge computing system which offloads computations from web-supported devices to edge servers. The proposed system exploits the portability of web apps, i.e., distributed as source code and runnable without installation, when migrating the execution state of web apps. This significantly reduces the complexity of state migration, allowing a web app to migrate within a few seconds. Also, the proposed system supports offloading of WebAssembly, a standard low-level instruction format for web apps, achieving up to an 8.4x speedup compared to offloading of pure JavaScript code. I also propose incremental offloading of neural networks (IONN), which offloads DNN execution while the DNN model is still being deployed, thus reducing the overhead of DNN model deployment. I further extended IONN to support large-scale edge server environments by proactively migrating DNN layers to edge servers that mobile users are predicted to visit, improving cold-start performance. Simulation with an open-source mobility dataset showed that the proposed system can significantly reduce the overhead of deploying a DNN model. Contents: Chapter 1 Introduction (1.1 Offloading Web App Computations to Edge Servers, 1.2 Offloading DNN Computations to Edge Servers); Chapter 2 Seamless Offloading of Web App Computations (2.1 Motivation: Computation-Intensive Web Apps, 2.2 Mobile Web Worker System, 2.3 Migrating Web Worker, 2.4 Evaluation); Chapter 3 IONN: Incremental Offloading of Neural Network Computations (3.1 Motivation: Overhead of Deploying DNN Model, 3.2 Background, 3.3 IONN for DNN Edge Computing, 3.4 DNN Partitioning, 3.5 Evaluation); Chapter 4 PerDNN: Offloading DNN Computations to Pervasive Edge Servers (4.1 Motivation: Cold Start Issue, 4.2 Proposed Offloading System: PerDNN, 4.3 Evaluation); Chapter 5 Related Works; Chapter 6 Conclusion; Bibliography.
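The layer-wise partitioning that IONN performs can be sketched as choosing a split point in a linear layer graph that minimizes total query time, counting the cost of uploading the server-side layers. The cost model and numbers below are illustrative, not the thesis's measurements or its actual partitioning algorithm.

```python
# Sketch of DNN partitioning: layers [0:split) run on the client, layers
# [split:] run on the edge server, but the server-side layers must first be
# uploaded and the split-point activation transferred over the network.

def best_split(client_ms, server_ms, upload_ms, transfer_ms):
    """Return (split_index, total_ms) minimizing end-to-end query time."""
    n = len(client_ms)
    best = (n, sum(client_ms))                 # baseline: run everything locally
    for split in range(n):
        cost = (sum(client_ms[:split])         # local prefix execution
                + transfer_ms[split]           # send activation to server
                + sum(upload_ms[split:])       # deploy remaining layers
                + sum(server_ms[split:]))      # remote suffix execution
        best = min(best, (split, cost), key=lambda t: t[1])
    return best

# Hypothetical per-layer timings (milliseconds) for a 3-layer model:
split, cost = best_split(client_ms=[30, 40, 50],
                         server_ms=[5, 6, 7],
                         upload_ms=[10, 10, 10],
                         transfer_ms=[8, 8, 8])
```

Because upload cost is counted per partition, re-running this as more layers finish uploading yields the incremental behavior: early queries use mostly local layers, later queries shift more work to the server.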

    Enhancing Mobile Capacity through Generic and Efficient Resource Sharing

    Get PDF
    Mobile computing devices are becoming indispensable in every aspect of human life, but diverse hardware limits make current mobile devices far from ideal for satisfying the performance requirements of modern mobile applications and for being used anytime, anywhere. Mobile Cloud Computing (MCC), which enhances mobile capacity through cooperative resource sharing, could be a viable way to bypass these limits, but it is challenging due to the heterogeneity of mobile devices in both hardware and software. Traditional schemes either restrict sharing to a specific type of hardware resource within individual applications, which requires tremendous reprogramming effort, or disregard the runtime execution pattern and transmit too much unnecessary data, wasting bandwidth and energy. To address these challenges, we present three novel resource sharing frameworks which utilize the various system resources of a remote or personal cloud to enhance mobile capacity in a generic and efficient manner. First, we propose a novel method-level offloading methodology to run mobile computational workloads on a remote cloud CPU. Data transmission during such offloading is minimized by identifying and selectively migrating only the memory contexts necessary for the method's execution. Second, we present a systematic framework that maximizes mobile graphics rendering performance with a remote cloud GPU, reusing redundant pixels across consecutive frames to reduce the transmitted frame data. Last, we propose to exploit unified mobile OS services and generically interconnect heterogeneous mobile devices into a personal mobile cloud, in which devices complement each other and flexibly share mobile peripherals (e.g., sensors, camera).
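The first framework's idea, migrating only the memory contexts a method actually needs, can be illustrated minimally: serialize just the method identity and the state reachable from its arguments rather than the whole heap. A real system would analyze the program to find that state; this Python stand-in is an assumption-laden sketch in which the method's code is taken to be already present on the server.

```python
# Sketch of method-level offloading with minimized data transmission:
# only the method name and its argument state are serialized and "sent".
import pickle

def migrate_call(func, *args):
    """Serialize the minimal context, simulate transfer, execute 'remotely'."""
    payload = pickle.dumps((func.__name__, args))   # minimal migrated context
    name, remote_args = pickle.loads(payload)       # server-side deserialization
    return func(*remote_args), len(payload)         # result + bytes "sent"

def heavy(xs):
    return sum(x * x for x in xs)

result, sent_bytes = migrate_call(heavy, list(range(100)))
```

The point of the design is visible in `sent_bytes`: the cost of migration scales with the state the method touches, not with the size of the application's heap.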