Cascading Cache Layer in Content Management System
Caching involves the temporary storage of data in a separate location. Cascading is the arrangement of items in sequence from top to bottom. A cascading cache layer in a content management system places data in layers, sequenced in order of importance; cached data are likewise evicted based on that order of importance. Because caching is chiefly concerned with the input and output of content and data, a cascading management scheme is needed to make data access easier than usual. This work examines caching and how it works, considers the various levels of caching in content management systems, explains what cascading means in a content management system and why it is important, and shows how arranging the cache in cascading layers makes data access faster and more efficient.
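The layering-and-eviction idea in this abstract can be illustrated with a minimal Python sketch. The class name, layer sizes, and importance thresholds below are invented for illustration; the paper itself does not specify an implementation.

```python
class CascadingCache:
    """Toy multi-layer cache: layer 0 is the most important (fastest, smallest).

    Items are placed into a layer by importance. When a layer overflows, its
    least important entry cascades down to the next layer; the last layer
    simply evicts. Hypothetical sketch, not code from the surveyed work.
    """

    def __init__(self, layer_sizes=(2, 4, 8)):
        self.layers = [{} for _ in layer_sizes]
        self.sizes = layer_sizes

    def put(self, key, value, importance):
        # Higher importance -> higher (earlier) layer; thresholds are arbitrary.
        level = 0 if importance > 0.66 else (1 if importance > 0.33 else 2)
        self._insert(level, key, (value, importance))

    def _insert(self, level, key, item):
        layer = self.layers[level]
        layer[key] = item
        if len(layer) > self.sizes[level]:
            # Evict the least important entry of this layer...
            victim = min(layer, key=lambda k: layer[k][1])
            evicted = layer.pop(victim)
            # ...and cascade it down to the next layer, if one exists.
            if level + 1 < len(self.layers):
                self._insert(level + 1, victim, evicted)

    def get(self, key):
        # Search layers top-down, i.e. in order of importance.
        for layer in self.layers:
            if key in layer:
                return layer[key][0]
        return None
```

Note how an item displaced from the top layer remains retrievable from a lower layer instead of being lost outright; that is the "cascading" behaviour the abstract describes.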
A survey of online data-driven proactive 5G network optimisation using machine learning
In fifth-generation (5G) mobile networks, proactive network optimisation plays an important role in meeting exponential traffic growth and more stringent service requirements, and in reducing capital and operational expenditure. Proactive network optimisation is widely acknowledged as one of the most promising ways to transform the 5G network based on big data analysis and cloud-fog-edge computing, but there are many challenges. Proactive algorithms will require accurate forecasting of highly contextualised traffic demand and quantification of its uncertainty to drive decision making with performance guarantees. Context in Cyber-Physical-Social Systems (CPSS) is often challenging to uncover, unfolds over time, and is even more difficult to quantify and integrate into decision making. The first part of the review focuses on mining and inferring CPSS context from heterogeneous data sources, such as online user-generated content. It examines the state-of-the-art methods currently employed to infer location, social behaviour, and traffic demand through a cloud-edge computing framework, combining them to form the input to proactive algorithms. The second part of the review focuses on exploiting and integrating the demand knowledge in a range of proactive optimisation techniques, covering the key aspects of load balancing, mobile edge caching, and interference management. In both parts, appropriate state-of-the-art machine learning techniques (including probabilistic uncertainty cascades in proactive optimisation), complexity-performance trade-offs, and demonstrative examples are presented to inspire readers. This survey couples the potential of online big data analytics, cloud-edge computing, statistical machine learning, and proactive network optimisation in a common cross-layer wireless framework.
The wider impact of this survey includes better cross-fertilising the academic fields of data analytics, mobile edge computing, AI, CPSS, and wireless communications, as well as informing industry of the promising potential in this area.
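One recurring ingredient of the proactive techniques this survey covers is provisioning against an upper confidence bound of forecast demand, so that forecast uncertainty becomes a performance-guarantee margin. The sketch below uses a naive historical-mean forecast and a normal-quantile margin; both are illustrative assumptions, not a method from the survey.

```python
import statistics

def provision_capacity(demand_history, z=1.64):
    """Provision cell capacity at an upper confidence bound of forecast demand.

    The forecast is simply the historical mean, and z = 1.64 corresponds
    roughly to the 95th percentile under a normal assumption, so capacity
    covers demand with high probability. Hypothetical sketch only.
    """
    mean = statistics.fmean(demand_history)
    std = statistics.stdev(demand_history)
    return mean + z * std
```

A real proactive optimiser would replace the mean with a contextualised traffic forecast and feed the resulting bound into, e.g., a load-balancing or caching decision.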
Computation Offloading System for Edge Cloud Environments
Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Information Engineering, February 2020. Soo-Mook Moon.
The purpose of my dissertation is to build lightweight edge computing systems which provide seamless offloading services even when users move across multiple edge servers. I focused on two specific application domains: 1) web applications and 2) DNN applications.
I propose an edge computing system which offloads computations from web-supported devices to edge servers. The proposed system exploits the portability of web apps, i.e., distributed as source code and runnable without installation, when migrating the execution state of web apps. This significantly reduces the complexity of state migration, allowing a web app to migrate within a few seconds. Also, the proposed system supports offloading of WebAssembly, a standard low-level instruction format for web apps, achieving up to 8.4x speedup compared to offloading of pure JavaScript code.
I also propose incremental offloading of neural network (IONN), which offloads DNN execution while simultaneously deploying the DNN model, thus reducing the overhead of DNN model deployment. I also extended IONN to support large-scale edge server environments by proactively migrating DNN layers to edge servers that mobile users are predicted to visit. Simulation with an open-source mobility dataset showed that the proposed system can significantly reduce the overhead of deploying a DNN model.
Chapter 1. Introduction
1.1 Offloading Web App Computations to Edge Servers
1.2 Offloading DNN Computations to Edge Servers
Chapter 2. Seamless Offloading of Web App Computations
2.1 Motivation: Computation-Intensive Web Apps
2.2 Mobile Web Worker System
2.2.1 Review of HTML5 Web Worker
2.2.2 Mobile Web Worker System
2.3 Migrating Web Worker
2.3.1 Runtime State of Web Worker
2.3.2 Snapshot of Mobile Web Worker
2.3.3 End-to-End Migration Process
2.4 Evaluation
2.4.1 Experimental Environment
2.4.2 Migration Performance
2.4.3 Application Execution Performance
Chapter 3. IONN: Incremental Offloading of Neural Network Computations
3.1 Motivation: Overhead of Deploying DNN Model
3.2 Background
3.2.1 Deep Neural Network
3.2.2 Offloading of DNN Computations
3.3 IONN for DNN Edge Computing
3.4 DNN Partitioning
3.4.1 Neural Network (NN) Execution Graph
3.4.2 Partitioning Algorithm
3.4.3 Handling DNNs with Multiple Paths
3.5 Evaluation
3.5.1 Experimental Environment
3.5.2 DNN Query Performance
3.5.3 Accuracy of Prediction Functions
3.5.4 Energy Consumption
Chapter 4. PerDNN: Offloading DNN Computations to Pervasive Edge Servers
4.1 Motivation: Cold Start Issue
4.2 Proposed Offloading System: PerDNN
4.2.1 Edge Server Environment
4.2.2 Overall Architecture
4.2.3 GPU-aware DNN Partitioning
4.2.4 Mobility Prediction
4.3 Evaluation
4.3.1 Performance Gain of Single Client
4.3.2 Large-Scale Simulation
Chapter 5. Related Works
Chapter 6. Conclusion
Bibliography
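The layer-wise DNN partitioning that this dissertation describes can be illustrated on a simple chain-shaped DNN: run a prefix of layers on the client, transmit one activation, and run the remaining layers on the edge server. The cost model below is a simplified, hypothetical sketch with invented timings; IONN itself partitions an NN execution graph rather than a plain chain.

```python
def best_split(client_ms, server_ms, act_tx_ms):
    """Pick the split point of a chain DNN that minimizes end-to-end latency.

    Layers [0, k) run on the client, the activation feeding layer k is
    transmitted, and layers [k, n) run on the edge server. act_tx_ms[k] is
    the time to transmit the input of layer k (act_tx_ms[0] is the raw model
    input); the time to return the final output is ignored for simplicity.
    """
    n = len(client_ms)
    best_k, best_cost = 0, float("inf")
    for k in range(n + 1):  # k == n means fully local execution
        cost = (sum(client_ms[:k])
                + (act_tx_ms[k] if k < n else 0)
                + sum(server_ms[k:]))
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k, best_cost
```

For example, if the raw input is large but the first layer's activation is small (as with convolutional front-ends), the optimizer keeps the first layer local and offloads the rest.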
From Traditional Adaptive Data Caching to Adaptive Context Caching: A Survey
Context data is in demand more than ever with the rapid increase in the
development of many context-aware Internet of Things applications. Research in
context and context-awareness is being conducted to broaden its applicability
in light of many practical and technical challenges. One of the challenges is
improving performance when responding to a large number of context queries.
Context Management Platforms that infer and deliver context to applications
measure this problem using Quality of Service (QoS) parameters. Although
caching is a proven way to improve QoS, the transiency of context, together with the variability and heterogeneity of context queries, poses an additional real-time cost management problem. This paper presents a critical survey of the
state-of-the-art in adaptive data caching with the objective of developing a
body of knowledge in cost- and performance-efficient adaptive caching
strategies. We comprehensively survey a large number of research publications
and evaluate, compare, and contrast different techniques, policies, approaches,
and schemes in adaptive caching. Our critical analysis is motivated by the
focus on adaptively caching context as a core research problem. A formal
definition for adaptive context caching is then proposed, followed by
identified features and requirements of a well-designed, objective optimal
adaptive context caching strategy.
Comment: This paper is currently under review with ACM Computing Surveys at this time of publishing on arXiv.
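The real-time cost trade-off described in this abstract, i.e. whether caching a transient context item pays off, can be sketched with a toy cost model. All parameters below are hypothetical and only illustrate the kind of adaptive, cost-driven decision the survey analyses.

```python
def should_cache(access_rate_hz, lifetime_s, retrieval_cost,
                 refresh_cost, hold_cost_per_s=0.0):
    """Decide adaptively whether a transient context item is worth caching.

    Over one item lifetime: not caching costs one retrieval per access,
    while caching costs one refresh per lifetime plus a holding cost.
    Minimal cost-model sketch; real strategies would also weigh QoS and
    query heterogeneity.
    """
    cost_no_cache = access_rate_hz * lifetime_s * retrieval_cost
    cost_cache = refresh_cost + hold_cost_per_s * lifetime_s
    return cost_cache < cost_no_cache
```

A hot, long-lived item is cached; a rarely queried or rapidly expiring one is fetched on demand, which is exactly the transiency problem that makes context caching harder than classic data caching.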
FogLearn: Leveraging Fog-based Machine Learning for Smart System Big Data Analytics
Big data analytics with cloud computing is one of the emerging areas for data processing and analytics. Fog computing is the paradigm in which fog devices help to reduce latency and increase throughput by assisting clients at the edge. This paper discusses the emergence of fog computing for mining analytics in big data from geospatial and medical health applications. It proposes and develops a fog-computing-based framework, FogLearn, for the application of K-means clustering to Ganga River Basin management and to real-world feature data for detecting patients suffering from diabetes mellitus. The proposed architecture employs machine learning on a deep learning framework for the analysis of pathological feature data obtained from smart watches worn by patients with diabetes, together with geographical parameters from the River Ganga basin geospatial database. The results show that fog computing holds immense promise for the analysis of medical and geospatial big data.
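FogLearn's clustering step is plain K-means. A pure-Python K-means for 2-D points is sketched below as a reference for the technique; the paper's actual data, features, and deep learning pipeline are not reproduced.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain K-means for 2-D points (the clustering technique FogLearn applies).

    Alternates assignment of each point to its nearest center with
    recomputation of each center as its cluster mean. Pure-Python sketch.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign p to the nearest center (squared Euclidean distance).
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster emptied out
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers
```

On well-separated data (say, two groups of river-basin sensor readings), the returned centers settle on the group means after a few iterations.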
Vehicle as a Service (VaaS): Leverage Vehicles to Build Service Networks and Capabilities for Smart Cities
Smart cities demand resources for rich immersive sensing, ubiquitous
communications, powerful computing, large storage, and high intelligence
(SCCSI) to support various kinds of applications, such as public safety,
connected and autonomous driving, smart and connected health, and smart living.
At the same time, it is widely recognized that vehicles such as autonomous
cars, equipped with significantly powerful SCCSI capabilities, will become
ubiquitous in future smart cities. By observing the convergence of these two
trends, this article advocates the use of vehicles to build a cost-effective
service network, called the Vehicle as a Service (VaaS) paradigm, where
vehicles empowered with SCCSI capability form a web of mobile servers and
communicators to provide SCCSI services in smart cities. Towards this
direction, we first examine the potential use cases in smart cities and
possible upgrades required for the transition from traditional vehicular ad hoc
networks (VANETs) to VaaS. Then, we will introduce the system architecture of
the VaaS paradigm and discuss how it can provide SCCSI services in future smart
cities, respectively. At last, we identify the open problems of this paradigm
and future research directions, including architectural design, service
provisioning, incentive design, and security & privacy. We expect that this
paper paves the way towards developing a cost-effective and sustainable
approach for building smart cities.
Comment: 32 pages, 11 figures
- โฆ