Towards Optimized Traffic Provisioning and Adaptive Cache Management for Content Delivery
Content delivery networks (CDNs) deploy hundreds of thousands of servers around the world to cache and serve trillions of user requests every day for a diverse set of content such as web pages, videos, software downloads and images. In this dissertation, we propose algorithms to provision traffic across cache servers and manage the content they host to achieve performance objectives such as maximizing the cache hit rate, minimizing the bandwidth cost of the network and minimizing the energy consumption of the servers.
Traffic provisioning is the process of determining the set of content domains hosted on the servers. We propose footprint descriptors that effectively capture the popularity characteristics and caching performance of different content classes. We also propose a footprint descriptor calculus that can be used to decide how content should be mixed or partitioned to efficiently provision traffic. To automate traffic provisioning, we propose optimization models to provision traffic such that the cache miss traffic from the network is minimized without overloading the servers. We find that such optimization models produce significant reductions in the cache miss traffic when compared with traffic provisioning algorithms in use today.
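To make the footprint-descriptor idea concrete, here is a loose sketch of its core "addition" operation under one simplifying assumption: we represent a class's descriptor only by its footprint curve (unique objects requested within a window of a given length), whereas the dissertation's calculus also tracks miss-rate components. Both function names are illustrative.

```python
def mix_footprints(fp_a, fp_b):
    """Footprint addition (simplified): the footprint of mixed traffic is the
    sum of the per-class footprints.
    fp[t] = expected unique objects requested in any window of length t."""
    return [a + b for a, b in zip(fp_a, fp_b)]

def characteristic_window(fp, cache_size):
    """Largest window whose footprint still fits in the cache; requests whose
    reuse falls beyond this window are (approximately) the cache misses."""
    for t, f in enumerate(fp):
        if f > cache_size:
            return t - 1
    return len(fp) - 1
```

Mixing shrinks each class's characteristic window (the classes compete for the same cache), which is exactly the effect the calculus lets an operator reason about before provisioning.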
Cache management is the process of deciding how content is cached in the servers of a CDN. We propose TTL-based caching algorithms that provably achieve performance targets specified by a CDN operator. We show that the proposed algorithms converge to the target hit rate and target cache size with low error. Finally, we propose cache management algorithms to make the servers energy-efficient using disk shutdown. We find that disk shutdown is well suited for CDN servers and provides energy savings without significantly impacting cache hit rates.
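The flavor of a TTL cache that tunes itself toward an operator-specified hit rate can be sketched as follows. This is not the dissertation's algorithm: the class name, the stochastic-approximation update rule, and the constants are all illustrative, and only the hit-rate target (not the cache-size target) is modeled.

```python
import random

class AdaptiveTTLCache:
    """Toy TTL cache that nudges its TTL toward a target hit rate."""
    def __init__(self, target_hit_rate, step=0.5):
        self.target = target_hit_rate
        self.step = step
        self.ttl = 1.0
        self.expiry = {}          # key -> logical time at which the copy expires

    def request(self, key, now):
        hit = self.expiry.get(key, -1.0) > now
        # Raise the TTL after a miss, lower it after a hit: the drift is zero
        # exactly when the long-run hit rate equals the target.
        self.ttl = max(0.0, self.ttl + self.step * (self.target - (1.0 if hit else 0.0)))
        self.expiry[key] = now + self.ttl   # reset the timer on every access
        return hit
```

Driving this cache with a Zipf-like request stream, the measured hit rate settles near the configured target once the TTL has adapted.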
Memory resource balancing for virtualized computing
Virtualization has become a common abstraction layer in modern data centers. By multiplexing hardware resources into multiple virtual machines (VMs) and thus enabling several operating systems to run on the same physical platform simultaneously, it can effectively reduce power consumption and building size or improve security by isolating VMs.
In a virtualized system, memory resource management plays a critical role in achieving high resource utilization and performance. Insufficient memory allocation to a VM degrades its performance dramatically; conversely, over-allocation wastes memory. Meanwhile, a VM’s memory demand may vary significantly. As a result, effective memory resource management calls for a dynamic memory balancer, which, ideally, can adjust memory allocation in a timely manner for each VM based on its current memory demand and thus achieve the best memory utilization and the optimal overall performance.
In order to estimate the memory demand of each VM and to arbitrate possible memory resource contention, a widely proposed approach is to construct an LRU-based miss ratio curve (MRC), which provides not only the current working set size (WSS) but also the correlation between performance and the target memory allocation size. Unfortunately, the cost of constructing an MRC is nontrivial. In this dissertation, we first present a low overhead LRU-based memory demand tracking scheme, which includes three orthogonal optimizations: AVL-based LRU organization, dynamic hot set sizing and intermittent memory tracking. Our evaluation results show that, for the whole SPEC CPU 2006 benchmark suite, after applying the three optimizing techniques, the mean overhead of MRC construction is lowered from 173% to only 2%.
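The baseline that those three optimizations accelerate is Mattson-style MRC construction from reuse (stack) distances, which can be sketched as follows. The plain-list LRU stack here costs O(N·M); the AVL-based organization, hot-set sizing, and intermittent tracking attack exactly that cost, but are omitted from this sketch.

```python
def miss_ratio_curve(trace, sizes):
    """Single-pass MRC: record each access's LRU stack distance, then read off
    the miss ratio for every candidate cache size."""
    stack, distances, cold = [], [], 0
    for addr in trace:
        if addr in stack:
            d = stack.index(addr) + 1     # stack distance (1-based)
            stack.pop(d - 1)
            distances.append(d)
        else:
            cold += 1                     # cold (compulsory) miss
        stack.insert(0, addr)             # move to the MRU position
    n = len(trace)
    # A cache of size c misses every cold access and every reuse farther than c.
    return {c: (cold + sum(1 for d in distances if d > c)) / n for c in sizes}
```

For a cyclic trace over three pages, a 3-page cache sees only the three cold misses, while a 2-page LRU cache misses on every access, which the curve reproduces.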
Based on the current WSS, we then predict its trend in the near future and take different strategies for different prediction results. When there is a sufficient amount of physical memory on the host, it locally balances its memory resources among the VMs. Once the local memory resource is insufficient and the memory pressure is predicted to persist for a sufficiently long time, a relatively expensive solution, VM live migration, is used to move one or more VMs from the hot host to other host(s). Finally, for transient memory pressure, a remote cache is used to alleviate the temporary performance penalty. Our experimental results show that this design achieves a 49% center-wide speedup.
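The three-way decision described above can be sketched as a toy classifier. The EWMA predictor, the thresholds, and the sustain window are all invented for illustration; the dissertation's predictor is not reproduced here.

```python
def classify_pressure(wss_history, capacity, alpha=0.5, sustain=3):
    """Toy decision rule in the spirit of the text:
    - demand fits in host memory      -> balance locally
    - sustained overload              -> live-migrate a VM
    - transient overload              -> absorb it with a remote cache"""
    ewma, over = wss_history[0], 0
    for w in wss_history[1:]:
        ewma = alpha * w + (1 - alpha) * ewma   # smoothed WSS estimate
        over = over + 1 if ewma > capacity else 0
    if ewma <= capacity:
        return "balance_locally"
    return "migrate" if over >= sustain else "remote_cache"
```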
Topics in access, storage, and sensor networks
In the first part of this dissertation, Data Over Cable Service Interface Specification (DOCSIS) and IEEE 802.3ah Ethernet Passive Optical Network (EPON), two access networking standards, are studied. We study the impact of two parameters of the DOCSIS protocol and derive the probability of message collision in the 802.3ah device discovery scheme. We survey existing bandwidth allocation schemes for EPONs, derive the average grant size in one such scheme, and study the performance of the shortest-job-first heuristic.
In the second part of this dissertation, we study networks of mobile sensors. We make progress towards an architecture for disconnected collections of mobile sensors. We propose a new design abstraction called tours which facilitates the combination of mobility and communication into a single design primitive and enables the system of sensors to reorganize into desirable topologies after failures. We also initiate a study of computation in mobile sensor networks. We study the relationship between two distributed computational models of mobile sensor networks: population protocols and self-similar functions. We define the notion of a self-similar predicate and show when it is computable by a population protocol.
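A classic population protocol makes the model concrete: anonymous agents compute the predicate "some agent started in state 1" purely through random pairwise interactions. This textbook example is illustrative only; the dissertation's self-similar predicates characterize which predicates such protocols can compute.

```python
import random

def or_protocol(init_states, steps=20000, seed=1):
    """Population protocol for the OR predicate: when two agents meet and
    either is in state 1, both move to state 1 (an epidemic spread)."""
    rng = random.Random(seed)
    agents = list(init_states)
    n = len(agents)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)   # a uniformly random interaction
        if agents[i] or agents[j]:
            agents[i] = agents[j] = 1    # transition rule: (1, x) -> (1, 1)
    return all(agents)                   # every agent outputs the predicate
```

After enough interactions the population stabilizes: all agents output 1 exactly when the predicate held initially.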
Transition graphs of population protocols lead us to the consideration of graph powers. We consider the direct product of graphs and a new variant, which we call the lexicographic direct product (or the clique product). We show that invariants concerning transposable walks in direct graph powers and transposable independent sets in graph families generated by the lexicographic direct product are uncomputable.
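For reference, the standard direct (tensor) product can be computed in one line over edge sets. The lexicographic direct product is the author's new variant, so its definition is deliberately not reproduced here.

```python
def direct_product(g, h):
    """Direct (tensor) product of two graphs given as sets of ordered edge
    pairs: ((u1, v1), (u2, v2)) is an edge iff u1->u2 in g and v1->v2 in h."""
    return {((u1, v1), (u2, v2)) for (u1, u2) in g for (v1, v2) in h}
```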
The last part of this dissertation makes contributions to the area of storage systems. We propose a sequential access detection and prefetching scheme and a dynamic cache sizing scheme for large storage systems. We evaluate the cache sizing scheme theoretically and through simulations. We compute the expected hit ratio of our scheme and of competing schemes, and bound the expected size of our dynamic cache sufficient to obtain an optimal hit ratio. We also develop a stand-alone simulator for studying our proposed scheme and integrate it with an empirically validated disk simulator.
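The detect-then-prefetch pattern behind such schemes can be sketched as follows. The class, the run-length threshold, and the prefetch degree are hypothetical illustrations, not the dissertation's parameters.

```python
class SequentialPrefetcher:
    """Toy sequential-access detector: once a run of consecutive block reads
    exceeds a threshold, prefetch the next `degree` blocks."""
    def __init__(self, threshold=2, degree=4):
        self.threshold, self.degree = threshold, degree
        self.last = None            # last block accessed
        self.run = 0                # length of the current sequential run
        self.prefetched = set()     # blocks fetched ahead of demand

    def access(self, block):
        hit = block in self.prefetched
        self.prefetched.discard(block)
        if self.last is not None and block == self.last + 1:
            self.run += 1
        else:
            self.run = 0            # run broken: back off
        self.last = block
        if self.run >= self.threshold:
            # issue prefetch for the next `degree` consecutive blocks
            self.prefetched.update(range(block + 1, block + 1 + self.degree))
        return hit
```

On a purely sequential scan, the first few accesses miss while the detector warms up, after which every demand read is absorbed by a prefetch.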
Optimized Pricing Scheme in Cloud Environment Using Deduplication
An IaaS environment provides resources as VM instances. Customers cannot utilize all of the allocated resources, yet are charged in full for the allocated storage; on the server side, storage is underutilized, so scalability degrades. We implement an improved billing cycle for accessing and utilizing the resources. Data deduplication is becoming increasingly popular in storage systems as a space-efficient approach to data backup. We present SiLo, a near-exact deduplication system that effectively and complementarily exploits similarity and locality to achieve high duplicate elimination, and we support secure storing and sharing of the files.
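The fingerprint-index core of deduplication can be sketched with fixed-size chunks and SHA-256 fingerprints; SiLo's contribution, exploiting similarity and locality to keep the index small, sits on top of this core and is not reproduced here. Both function names are illustrative.

```python
import hashlib

def dedup_store(data, chunk_size=4096, store=None):
    """Toy fixed-size-chunk deduplication: split data into chunks, store each
    unique chunk once under its fingerprint, and return a recipe of
    fingerprints from which the data can be rebuilt."""
    store = {} if store is None else store
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:          # a duplicate chunk costs no extra space
            store[fp] = chunk
        recipe.append(fp)
    return recipe, store

def restore(recipe, store):
    """Rebuild the original bytes from a recipe of fingerprints."""
    return b"".join(store[fp] for fp in recipe)
```

A backup containing three copies of the same 4 KiB chunk stores it once, while the recipe preserves the original layout exactly.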
A unified approach to the performance analysis of caching systems
We propose a unified methodology to analyse the performance of caches (both
isolated and interconnected), by extending and generalizing a decoupling
technique originally known as Che's approximation, which provides very accurate
results at low computational cost. We consider several caching policies, taking
into account the effects of temporal locality. In the case of interconnected
caches, our approach allows us to do better than the Poisson approximation
commonly adopted in prior work. Our results, validated against simulations and
trace-driven experiments, provide interesting insights into the performance of
caching systems.
Comment: in ACM TOMPECS 2016. Preliminary version published at IEEE Infocom 201
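The decoupling idea at the heart of Che's approximation is easy to reproduce for a single LRU cache under the independent reference model: solve for the characteristic time T at which the expected occupancy equals the cache size, then each object's hit probability follows in closed form. This is a minimal sketch of the classical approximation, not the paper's extensions to other policies or cache networks.

```python
import math

def che_hit_rate(rates, cache_size):
    """Che's approximation for LRU under IRM (Poisson arrivals with rates
    lambda_i): find T with sum_i (1 - exp(-lambda_i * T)) = C by bisection,
    then the per-object hit probability is h_i = 1 - exp(-lambda_i * T)."""
    def occupancy(t):
        return sum(1.0 - math.exp(-lam * t) for lam in rates)
    lo, hi = 0.0, 1.0
    while occupancy(hi) < cache_size:     # bracket the characteristic time
        hi *= 2.0
    for _ in range(100):                  # bisection on T
        mid = (lo + hi) / 2.0
        if occupancy(mid) < cache_size:
            lo = mid
        else:
            hi = mid
    t_c = (lo + hi) / 2.0
    total = sum(rates)
    # request-weighted average hit probability
    return sum(lam * (1.0 - math.exp(-lam * t_c)) for lam in rates) / total
```

Under a Zipf popularity law the predicted hit rate grows monotonically with cache size, matching the qualitative behavior seen in trace-driven experiments.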
Run Time Approximation of Non-blocking Service Rates for Streaming Systems
Stream processing is a compute paradigm that promises safe and efficient
parallelism. Modern big-data problems are often well suited for stream
processing's throughput-oriented nature. Realization of efficient stream
processing requires monitoring and optimization of multiple communications
links. Most techniques to optimize these links use queueing network models or
network flow models, which require some idea of the actual execution rate of
each independent compute kernel within the system. What we want to know is how
fast each kernel can process data independently of other communicating kernels.
This is known as the "service rate" of the kernel within the queueing
literature. Current approaches to divining service rates are static. Modern
workloads, however, are often dynamic. Shared cloud systems also present
applications with highly dynamic execution environments (multiple users,
hardware migration, etc.). It is therefore desirable to continuously re-tune an
application during run time (online) in response to changing conditions. Our
approach enables online service rate monitoring under most conditions,
obviating the need for reliance on steady state predictions for what are
probably non-steady state phenomena. First, some of the difficulties associated
with online service rate determination are examined. Second, the algorithm to
approximate the online non-blocking service rate is described. Lastly, the
algorithm is implemented within the open source RaftLib framework for
validation using a simple microbenchmark as well as two full streaming
applications.
Comment: technical report
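The core measurement discipline, timing a kernel only when it is neither starved nor blocked, can be sketched in a few lines. This is a hypothetical illustration, not RaftLib's API: the function name, queue representation, and capacity check are all invented.

```python
import time
from collections import deque

def online_service_rate(kernel, in_q, out_q, out_capacity, window=100):
    """Estimate a kernel's non-blocking service rate online by timing only
    firings in which the input queue is non-empty and the output queue is not
    full, so upstream/downstream queueing does not pollute the estimate."""
    samples = deque(maxlen=window)       # sliding window of service times
    while in_q:
        if len(out_q) >= out_capacity:
            break                        # blocked: stop rather than mis-measure
        item = in_q.popleft()
        t0 = time.perf_counter()
        out_q.append(kernel(item))
        samples.append(time.perf_counter() - t0)
    return len(samples) / sum(samples) if samples else None  # firings/second
```

Because the window slides, the estimate tracks a kernel whose cost drifts at run time, which is precisely what static, steady-state models cannot do.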