
    Offloading Content with Self-organizing Mobile Fogs

    Mobile users in an urban environment access content on the Internet from different locations. It is challenging for current service providers to cope with the increasing content demand from a large number of collocated mobile users. In-network caching that offloads content at nodes closer to users alleviates the issue, though efficient cache management is required to determine who should cache what, when, and where in an urban environment, given the nodes' limited computing, communication, and caching resources. To address this, we first define a novel relation between content popularity and availability in the network and investigate a node's eligibility to cache content based on its urban reachability. We then allow nodes to self-organize into mobile fogs to increase the distributed cache and maximize content availability in a cost-effective manner. However, since nodes may be rational (self-interested), we propose a coalition game in which nodes offer a maximum "virtual cache", assuming the service/content provider pays them a monetary reward. Nodes are allowed to merge into different spatio-temporal coalitions in order to increase the distributed cache size at the network edge. Results obtained through simulations using a realistic urban mobility trace validate the performance of our caching system, showing a cache hit ratio of 60-85% compared to the 30-40% obtained by existing schemes and 10% in the case of no coalition.
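
    The coalition step can be pictured with a toy payoff model. The sketch below is an illustrative assumption, not the paper's actual game formulation: two coalitions merge only when pooling their reachability-weighted caches raises every member's average payoff.

```python
# Hypothetical sketch of coalition formation for a pooled "virtual cache".
# The payoff model (coverage bonus, quadratic coordination cost) and all
# names are illustrative assumptions, not the paper's formulation.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    cache_size: int   # blocks this node can contribute to the pooled cache
    reach: float      # urban reachability score in [0, 1]

def coalition_payoff(nodes, reward_per_block=1.0, coord_cost=0.5):
    # Pooled virtual cache weighted by reachability, a small superadditive
    # coverage bonus, and a coordination cost that grows with coalition size.
    virtual_cache = sum(n.cache_size * n.reach for n in nodes)
    coverage_bonus = 1.0 + 0.1 * len(nodes)
    return reward_per_block * virtual_cache * coverage_bonus \
        - coord_cost * len(nodes) ** 2

def merge_if_beneficial(coalition_a, coalition_b):
    # Merge only if every member's average payoff improves (a simple
    # stability rule; the paper's game may use a different one).
    merged = coalition_a + coalition_b
    per_node = lambda c: coalition_payoff(c) / len(c)
    if per_node(merged) >= max(per_node(coalition_a), per_node(coalition_b)):
        return merged
    return None

bus = [Node("bus-1", 100, 0.90)]
taxi = [Node("taxi-7", 90, 0.85)]
print(merge_if_beneficial(bus, taxi))  # here pooling pays off, so they merge
```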

    Scalable cooperative caching algorithm based on bloom filters

    This thesis presents the design, implementation, and evaluation of a novel cooperative caching algorithm based on the Bloom filter data structure. The new algorithm uses a decentralized approach to resolve the problems that prevent existing solutions from scaling: an overloaded manager, communication overhead among clients, and memory overhead in the global cache. The new solution reduces the manager load and the communication overhead by distributing the global cache information among cooperating clients, so the manager no longer maintains the global cache. Furthermore, the memory overhead is decreased by the Bloom filter data structure, which saves memory space in the global cache and makes the new algorithm scalable. The correctness of the research hypothesis is verified by running experiments on the caching algorithms. The results demonstrate that the new caching algorithm maintains block access times as low as those of existing algorithms. In addition, the new algorithm decreases the manager load by a factor of nine, and the communication overhead is reduced by nearly a factor of six as a result of distributing the global cache to clients. Finally, the results show a significant reduction in memory overhead, which also contributes to the scalability of the new algorithm.
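
    The memory saving comes from the Bloom filter's fixed-size summary. Below is a minimal sketch of the idea, with illustrative parameters rather than the thesis's implementation: each client condenses its cached block IDs into a bit array that peers can probe locally instead of querying a central manager.

```python
# Minimal Bloom filter sketch: a fixed-size bit array summarizes a client's
# cached block IDs. Parameters and names are illustrative assumptions.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=8192, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, key: str):
        # Derive k bit positions from salted hashes of the key.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key: str):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key: str) -> bool:
        # False positives are possible, false negatives are not: a miss here
        # means the peer definitely does not cache the block.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))

# Each client publishes its filter; peers probe filters locally instead of
# asking a central manager, cutting manager load and message traffic.
peer_filter = BloomFilter()
for block in ("blk-17", "blk-42"):
    peer_filter.add(block)
print(peer_filter.might_contain("blk-42"))   # True
print(peer_filter.might_contain("blk-99"))   # almost certainly False
```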

    Content storage and retrieval mechanisms for vehicular delay-tolerant networks

    Vehicular delay-tolerant networks (VDTNs) were proposed as a novel disruptive network concept based on the delay-tolerant networking (DTN) paradigm. The VDTN architecture uses vehicles to relay messages, enabling network connectivity in challenging scenarios. Due to intermittent connectivity, network nodes carry messages in their buffers, relaying them only when a proper contact opportunity occurs. Thus, the storage capacity and message retrieval of intermediate nodes directly affect network performance, and efficient, robust caching and forwarding mechanisms are needed. This dissertation proposes a content storage and retrieval (CSR) solution for VDTN networks. The solution consists of storage and retrieval control labels attached to every data bundle of aggregated network traffic. These labels define cacheable contents and apply cache-control and forwarding restrictions to data bundles. The presented mechanisms gather contributions from cache-based technologies such as Web caching schemes, ad-hoc networks, and DTN networks. The solution is fully automated, providing fast, safe, and reliable data transfer and storage management while significantly improving the applicability and performance of VDTN networks. This work presents the performance evaluation and validation of the CSR mechanisms on a VDTN testbed. Furthermore, it presents several network performance evaluations using the well-known DTN routing protocols Epidemic and Spray and Wait (including its binary variant). The comparison of network behavior and performance under both protocols, with and without the CSR mechanisms, shows that the CSR mechanisms significantly improve overall network performance.
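
    The label mechanism can be sketched as a small header carried by each bundle. The field names below are illustrative assumptions, since the dissertation defines its own label format.

```python
# Hypothetical sketch of CSR control labels: each bundle tells relay nodes
# whether its payload is cacheable and how it may be forwarded. Field names
# are illustrative assumptions, not the dissertation's actual format.
from dataclasses import dataclass
import time

@dataclass
class CSRLabel:
    cacheable: bool
    ttl_seconds: int          # drop the cached copy after this lifetime
    max_forward_hops: int     # forwarding restriction

@dataclass
class Bundle:
    bundle_id: str
    payload: bytes
    label: CSRLabel

def handle_bundle(bundle, cache, now=None):
    # A relay node caches only bundles the label marks cacheable and
    # evicts entries whose TTL has expired.
    now = time.time() if now is None else now
    expired = [bid for bid, (stored_at, item) in cache.items()
               if now - stored_at > item.label.ttl_seconds]
    for bid in expired:
        del cache[bid]
    if bundle.label.cacheable:
        cache[bundle.bundle_id] = (now, bundle)

cache = {}
b = Bundle("b1", b"road-traffic-report",
           CSRLabel(cacheable=True, ttl_seconds=600, max_forward_hops=5))
handle_bundle(b, cache)
print(list(cache))   # ['b1']
```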

    ๋ฐ์ดํ„ฐ ์ง‘์•ฝ์  ์‘์šฉ์˜ ํšจ์œจ์ ์ธ ์‹œ์Šคํ…œ ์ž์› ํ™œ์šฉ์„ ์œ„ํ•œ ๋ฉ”๋ชจ๋ฆฌ ์„œ๋ธŒ์‹œ์Šคํ…œ ์ตœ์ ํ™”

    Doctoral dissertation -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, August 2020. Advisor: ์—ผํ—Œ์˜. With explosive data growth, data-intensive applications, such as relational databases and key-value storage, have become increasingly popular in a variety of domains in recent years. To meet the growing performance demands of data-intensive applications, it is crucial to efficiently and fully utilize memory resources for the best possible performance. However, general-purpose operating systems (OSs) are designed to provide system resources to all applications running on a system in a fair manner at the system level. A single application may find it difficult to fully exploit the system's best performance due to this system-level fairness. For performance reasons, many data-intensive applications reimplement mechanisms that OSs already provide, under the assumption that they know the data better than the OS. They can be greedily optimized for performance, but this may result in inefficient use of system resources. In this dissertation, we claim that simple OS support with minor application modifications can yield even higher application performance without sacrificing system-level resource utilization. We optimize and extend the OS memory subsystem to better support applications while addressing three memory-related issues in data-intensive applications. First, we introduce a memory-efficient cooperative caching approach between the application and the kernel buffer to address the double-caching problem, where the same data resides in multiple layers. Second, we present a memory-efficient, transparent zero-copy read I/O scheme to avoid the performance interference caused by memory copies during I/O. Third, we propose a memory-efficient fork-based checkpointing mechanism for in-memory database systems to mitigate the memory footprint problem of the existing fork-based checkpointing scheme, whose memory usage grows incrementally (up to 2x) during checkpointing for update-intensive workloads. To show the effectiveness of our approach, we implement and evaluate our schemes on real multi-core systems. The experimental results demonstrate that our cooperative approach addresses the above issues more effectively than existing non-cooperative approaches while delivering better performance (in terms of transaction processing speed, I/O throughput, and memory footprint).
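
    Of the three mechanisms, the fork-based checkpointing pattern is the most compact to illustrate. The POSIX sketch below shows the baseline the dissertation improves on, and why its footprint grows: every page the parent dirties during the checkpoint is physically duplicated by copy-on-write. Names and the JSON snapshot format are illustrative.

```python
# Minimal sketch of fork-based checkpointing (the baseline scheme, not the
# dissertation's optimized mechanism). POSIX-only; names are illustrative.
import json
import os

store = {"k1": "v1", "k2": "v2"}   # the in-memory database

def checkpoint(path):
    pid = os.fork()
    if pid == 0:
        # Child: sees a copy-on-write snapshot frozen at fork time and
        # serializes it while the parent keeps serving updates.
        with open(path, "w") as f:
            json.dump(store, f)
        os._exit(0)
    return pid  # parent returns immediately and continues serving writes

child = checkpoint("/tmp/snapshot.json")
store["k3"] = "v3"   # parent-side update dirties pages; CoW duplicates them,
                     # which is the footprint growth the dissertation targets
os.waitpid(child, 0)
with open("/tmp/snapshot.json") as f:
    print(json.load(f))   # snapshot excludes k3
```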

    Building Internet caching systems for streaming media delivery

    The proxy has been widely and successfully used to cache static Web objects fetched by a client so that subsequent clients requesting the same objects can be served directly from the proxy instead of from distant origin servers, thus reducing the server's load, the network traffic, and the client response time. However, with the dramatic increase of streaming media objects on the Internet, the existing proxy cannot deliver them efficiently due to their large sizes and clients' real-time requirements. In this dissertation, we design, implement, and evaluate cost-effective, high-performance proxy-based Internet caching systems for streaming media delivery. Addressing the conflicting performance objectives of streaming media delivery, we first propose an efficient segment-based streaming media proxy system model. This model has guided us in designing a practical streaming proxy, called Hyper-Proxy, that aims to deliver streaming media data to clients with minimal playback jitter and a small startup latency while achieving high caching performance. Second, we have implemented Hyper-Proxy by leveraging the existing Internet infrastructure; Hyper-Proxy enables streaming service on common Web servers. The evaluation of Hyper-Proxy in both global Internet and local network environments shows that it provides satisfying streaming performance to clients while maintaining good cache performance. Finally, to further improve streaming delivery efficiency, we propose a group of Shared Running Buffer (SRB) based proxy caching techniques to effectively utilize the proxy's memory. The SRB algorithms can significantly reduce the media server's and proxy's load and network traffic and relieve the bottlenecks of disk and network bandwidth. The contributions of this dissertation are threefold: (1) we have studied several critical performance trade-offs and provided insights into Internet media content caching and delivery; our understanding further led us to establish an effective streaming system optimization model; (2) we have designed and evaluated several efficient algorithms to support Internet streaming content delivery, including segment caching, segment prefetching, and memory locality exploitation for streaming; (3) having addressed several system challenges, we have successfully implemented a real streaming proxy system and deployed it in a large industrial enterprise.
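
    A segment-based admission policy of the general kind described above can be sketched as follows; the prefix/popularity thresholds are illustrative assumptions, not Hyper-Proxy's actual policy.

```python
# Illustrative segment-based proxy cache: always keep the first segments of
# an object (to hide startup latency), admit later segments only once the
# object proves popular. Thresholds and names are assumptions.
class SegmentCache:
    def __init__(self, prefix_segments=2, popularity_threshold=3):
        self.prefix_segments = prefix_segments
        self.threshold = popularity_threshold
        self.cache = {}     # (object_id, seg_no) -> segment bytes
        self.hits = {}      # object_id -> request count

    def on_request(self, object_id, seg_no, fetch_from_origin):
        self.hits[object_id] = self.hits.get(object_id, 0) + 1
        key = (object_id, seg_no)
        if key in self.cache:
            return self.cache[key]              # served from the proxy
        data = fetch_from_origin(object_id, seg_no)
        # Always cache the prefix; cache the rest only for popular objects.
        if seg_no < self.prefix_segments or \
                self.hits[object_id] >= self.threshold:
            self.cache[key] = data
        return data

proxy = SegmentCache()
origin = lambda oid, seg: f"{oid}-seg{seg}".encode()  # stands in for a server
proxy.on_request("movie-1", 0, origin)   # prefix segment: cached immediately
proxy.on_request("movie-1", 5, origin)   # body segment: object not yet popular
print(sorted(proxy.cache))               # [('movie-1', 0)]
```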

    Dynamic organization schemes for cooperative proxy caching

    In a generic cooperative caching architecture, web proxies form a mesh network. When a proxy cannot satisfy a request, it forwards the request to the other nodes of the mesh. Since a local cache cannot fulfill the majority of arriving requests (typical local hit ratios are about 30-50%), the volume of queries diverted to neighboring nodes can grow substantially and may consume a considerable amount of system resources. A proxy does not need to cooperate with every node of the mesh, for the following reasons: (i) the traffic characteristics may be highly diverse; (ii) the contents of some nodes may extensively overlap; (iii) the inter-node distance might be too large. Furthermore, organizing N proxies in a mesh topology introduces scalability problems, since the number of queries is of the order of N^2. Therefore, restricting the number of neighbors of each proxy to k < N - 1 will likely lead to a balanced trade-off between query overhead and hit ratio, provided cooperation is done among useful neighbors. Selecting useful neighbors is not straightforward, for a number of reasons: web access patterns change dynamically, and the availability of proxies is not always globally known. This paper proposes a set of algorithms that enable proxies to independently explore the network and choose the k most beneficial (according to local criteria) neighbors in a dynamic fashion. The simulation experiments illustrate that the proposed dynamic neighbor reconfiguration schemes significantly reduce the overhead incurred by the mesh topology while yielding higher hit ratios than the static approach.
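
    The neighbor-selection loop can be pictured with a simple hit-rate score; the scoring and exploration rules below are illustrative assumptions, not the paper's exact algorithms.

```python
# Sketch of dynamic neighbor selection: score peers by the remote hits they
# returned per forwarded query, keep the k best, and occasionally probe a
# random non-neighbor so the set adapts as access patterns shift.
import random

def select_neighbors(stats, k, all_proxies, explore_prob=0.1):
    """stats maps proxy_id -> (queries_sent, hits_returned)."""
    def score(p):
        sent, hits = stats.get(p, (0, 0))
        return hits / sent if sent else 0.0
    ranked = sorted(stats, key=score, reverse=True)
    neighbors = ranked[:k]
    # Exploration: sometimes swap the worst neighbor for an untried proxy,
    # since proxy availability is not globally known.
    candidates = [p for p in all_proxies if p not in neighbors]
    if candidates and neighbors and random.random() < explore_prob:
        neighbors[-1] = random.choice(candidates)
    return neighbors

stats = {"p1": (100, 45), "p2": (100, 10), "p3": (80, 30), "p4": (5, 1)}
print(select_neighbors(stats, k=2, all_proxies=["p1", "p2", "p3", "p4", "p5"]))
```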

    Improving the Performance of the Distributed File System through Anticipated Parallel Processing

    In the emerging Big Data scenario, Distributed File Systems (DFSs) are used for storing and accessing information in a scalable manner, and many cloud computing systems use a DFS as their main storage component. The Big Data applications deployed in cloud computing systems perform read operations far more frequently than writes, so improving the performance of read access has become an important research issue in DFS. In the literature, many client-side caching schemes with appropriate prefetching techniques have been proposed for improving the performance of read access in the DFS, including a speculation-based approach that uses client-side caching. In this paper, we propose a new read algorithm for the DFS based on anticipated parallel processing. We have evaluated the performance of the proposed algorithm using mathematical and simulation methods, and the results indicate that it performs better than the speculation-based algorithm proposed in the literature.
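
    The anticipated-parallel-read idea can be sketched as follows: while block i is served, the blocks expected to be read next are fetched concurrently so later reads hit the local cache. The fetch function and anticipation depth are assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch of anticipated parallel reads from a DFS: serve the
# requested block, then prefetch the anticipated next blocks in parallel
# worker threads. fetch_block and depth are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor

def read_with_anticipation(fetch_block, start, depth=3, cache=None):
    cache = {} if cache is None else cache
    data = cache.pop(start, None)
    if data is None:
        data = fetch_block(start)       # demand read on a cache miss
    # Anticipate the blocks likely to be read next and fetch them in parallel.
    upcoming = [b for b in range(start + 1, start + 1 + depth)
                if b not in cache]
    with ThreadPoolExecutor(max_workers=depth) as pool:
        for blk, block_data in zip(upcoming, pool.map(fetch_block, upcoming)):
            cache[blk] = block_data
    return data, cache

fetch = lambda b: f"block-{b}".encode()   # stands in for a DFS chunk read
data, cache = read_with_anticipation(fetch, 0)
print(data, sorted(cache))                # block 0 plus prefetched blocks 1-3
```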