Interposing Flash between Disk and DRAM to Save Energy for Streaming Workloads
In computer systems, the storage hierarchy, composed of a disk drive and DRAM, is responsible for a large portion of the total energy consumed. This work studies the energy merit of interposing flash memory as a streaming buffer between the disk drive and the DRAM. In doing so, we extend the spin-down period of the disk drive and cut down on the DRAM capacity at the cost of (extra) flash.

We study two different streaming applications: mobile multimedia players and media servers. Our simulation results show that for light workloads, a system with flash as a buffer between the disk and the DRAM consumes up to 40% less energy than the same system without a flash buffer. For heavy workloads, savings of at least 30% are possible. We also address the wear-out of flash and present a simple solution to extend its lifetime.
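The energy argument above can be sketched with a toy model. All power figures, the spin-up cost, and the disk transfer rate below are illustrative assumptions of mine, not the paper's measurements; the point is only that a larger, lower-power flash buffer means fewer disk refill cycles and longer standby periods:

```python
def streaming_energy(duration_s, bitrate_mb_s, buffer_mb, buffer_w):
    """Energy (J) to stream for duration_s at bitrate_mb_s through a
    buffer of buffer_mb megabytes drawing buffer_w watts."""
    DISK_ACTIVE_W, DISK_STANDBY_W = 2.0, 0.2   # assumed disk power states
    DISK_RATE_MB_S = 50.0                      # assumed sequential disk rate
    SPINUP_J = 6.0                             # assumed energy per spin-up
    cycle_s = buffer_mb / bitrate_mb_s         # time to drain one buffer fill
    duty = bitrate_mb_s / DISK_RATE_MB_S       # fraction of time disk is active
    disk_j = duration_s * (duty * DISK_ACTIVE_W
                           + (1 - duty) * DISK_STANDBY_W)
    disk_j += (duration_s / cycle_s) * SPINUP_J  # one spin-up per refill cycle
    return disk_j + duration_s * buffer_w

# One hour of 1 MB/s streaming: small DRAM buffer vs. large flash buffer.
dram_only = streaming_energy(3600, 1.0, 64, 0.30)    # 64 MB DRAM buffer
with_flash = streaming_energy(3600, 1.0, 512, 0.05)  # 512 MB flash buffer
saving = 1 - with_flash / dram_only
```

Under these assumed numbers the flash configuration wins mainly by amortizing spin-up energy over far fewer, longer refill cycles, which is the mechanism the abstract describes.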
Evaluation of Tradeoffs in Resource Management Techniques for Multimedia Storage Servers
Many modern applications can benefit from sharing of resources such as network bandwidth, disk bandwidth, and so on. In addition, many information systems store (or would like to store) data that can be of use to many different classes of applications, e.g., digital-library systems. Much of the difficulty in efficient resource management of such systems arises when these applications have vastly different performance and quality-of-service (QoS) requirements as well as resource demand characteristics. In this work we present a performance study of a multimedia storage system that serves multiple types of workloads, specifically a mixture of real-time and non-real-time workloads, by allowing sharing of resources among these different workloads while satisfying their performance requirements and QoS constraints. The broad aim of this work is to examine the issues and tradeoffs associated with mixing multiple workloads on the same server, and to explore the possibility of maintaining reasonable performance and QoS without having to partition the resources. The main contribution of this work is the exposition of the tradeoffs involved in resource management in such systems. Although many different resources can be considered, here we concentrate mostly on the I/O bandwidth resource. The performance metrics of interest are the mean and variance of the response time for the non-real-time applications and the probability of missing a deadline for the real-time applications. The increased use of buffer space is also considered as a tradeoff for improvements in the above performance metrics, i.e., response time and probability of missing deadlines.
(Also cross-referenced as UMIACS-TR-98-30.)
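The metrics this abstract names can be made concrete with a small discrete-event simulation. The sketch below is my own toy model, not the paper's: a single disk shared by real-time (RT) and best-effort (BE) requests under a non-preemptive priority discipline, with all rates, deadlines, and class mix assumed for illustration. It reports exactly the three quantities of interest: BE response-time mean and variance, and the RT deadline-miss probability.

```python
import heapq
import random
import statistics

random.seed(1)
SERVICE_S = 0.012      # assumed mean disk service time per request
LOAD = 0.8             # assumed server utilization
DEADLINE_S = 0.05      # assumed real-time deadline
RT_FRACTION = 0.3      # assumed share of real-time requests

# Poisson arrivals; each request is real-time (RT) or best-effort (BE).
t, jobs = 0.0, []
for _ in range(20000):
    t += random.expovariate(LOAD / SERVICE_S)
    jobs.append((t, random.random() < RT_FRACTION))

# Non-preemptive priority queue: RT requests are always served before BE,
# FIFO within each class (heap key = (class, arrival time)).
pending, resp, now, i = [], {"rt": [], "be": []}, 0.0, 0
while i < len(jobs) or pending:
    while i < len(jobs) and jobs[i][0] <= now:
        arr, is_rt = jobs[i]
        heapq.heappush(pending, (0 if is_rt else 1, arr))
        i += 1
    if not pending:                 # server idle: jump to next arrival
        now = jobs[i][0]
        continue
    cls, arr = heapq.heappop(pending)
    now += random.expovariate(1 / SERVICE_S)
    resp["rt" if cls == 0 else "be"].append(now - arr)

miss_prob = sum(r > DEADLINE_S for r in resp["rt"]) / len(resp["rt"])
mean_be = statistics.mean(resp["be"])
var_be = statistics.variance(resp["be"])
```

Raising RT_FRACTION or tightening DEADLINE_S in this sketch shows the tradeoff the abstract discusses: favoring the real-time class lowers miss probability at the cost of higher mean and variance of best-effort response time.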
Building Internet caching systems for streaming media delivery
The proxy has been widely and successfully used to cache static Web objects fetched by a client, so that subsequent clients requesting the same objects can be served directly from the proxy instead of from distant sources, thus reducing the server's load, the network traffic, and the client response time. However, with the dramatic increase of streaming media objects emerging on the Internet, the existing proxy cannot deliver them efficiently due to their large sizes and clients' real-time requirements.

In this dissertation, we design, implement, and evaluate cost-effective and high-performance proxy-based Internet caching systems for streaming media delivery. Addressing the conflicting performance objectives of streaming media delivery, we first propose an efficient segment-based streaming media proxy system model. This model has guided us in designing a practical streaming proxy, called Hyper-Proxy, aimed at delivering streaming media data to clients with minimum playback jitter and small startup latency while achieving high caching performance. Second, we have implemented Hyper-Proxy by leveraging the existing Internet infrastructure. Hyper-Proxy enables streaming service on common Web servers. The evaluation of Hyper-Proxy in the global Internet environment and in a local network environment shows that it can provide satisfactory streaming performance to clients while maintaining good cache performance. Finally, to further improve streaming delivery efficiency, we propose a group of Shared Running Buffer (SRB) based proxy caching techniques to effectively utilize the proxy's memory. The SRB algorithms can significantly reduce the media server's and proxy's load and network traffic and relieve the bottlenecks of disk bandwidth and network bandwidth.

The contributions of this dissertation are threefold: (1) we have studied several critical performance trade-offs and provided insights into Internet media content caching and delivery, and our understanding further leads us to establish an effective streaming system optimization model; (2) we have designed and evaluated several efficient algorithms to support Internet streaming content delivery, including segment caching, segment prefetching, and memory locality exploitation for streaming; (3) having addressed several system challenges, we have successfully implemented a real streaming proxy system and deployed it in a large industrial enterprise.
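The segment-based caching idea can be illustrated with a short sketch. This is my own minimal model, not Hyper-Proxy's actual design: the class names, the fixed segment granularity, and the policy of pinning segment 0 are assumptions. It captures the two goals the abstract pairs together, small startup latency (the prefix is never evicted) and high caching performance (other segments are managed LRU):

```python
from collections import OrderedDict

class SegmentCache:
    """Proxy cache of fixed-size media segments, LRU-evicted, except that
    segment 0 of each object (the prefix) is kept to hide startup latency."""

    def __init__(self, capacity_segments):
        self.capacity = capacity_segments
        self.cache = OrderedDict()              # (media_id, seg_no) -> data

    def get(self, media_id, seg_no, fetch):
        key = (media_id, seg_no)
        if key in self.cache:                   # hit: serve from the proxy
            self.cache.move_to_end(key)
            return self.cache[key]
        data = fetch(media_id, seg_no)          # miss: fetch from origin
        self.cache[key] = data
        if len(self.cache) > self.capacity:
            # Evict the least recently used non-prefix segment if any,
            # otherwise fall back to plain LRU.
            victim = next((k for k in self.cache if k[1] != 0), None)
            if victim is not None:
                del self.cache[victim]
            else:
                self.cache.popitem(last=False)
        return data

calls = []
def origin(mid, seg):                           # hypothetical origin server
    calls.append((mid, seg))
    return f"{mid}:{seg}"

proxy = SegmentCache(capacity_segments=2)
for seg in (0, 1, 2):                # first viewer streams three segments
    proxy.get("clip", seg, origin)
proxy.get("clip", 0, origin)         # later viewer: prefix still cached
```

After the first viewer, segment 1 has been evicted but the prefix survives, so the later viewer's startup is served locally without another origin fetch.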
A survey on cost-effective context-aware distribution of social data streams over energy-efficient data centres
Social media have emerged in the last decade as a viable and ubiquitous means of communication. The ease of user content generation within these platforms, e.g. check-in information, multimedia data, etc., along with the proliferation of Global Positioning System (GPS)-enabled, always-connected capture devices, leads to data streams of unprecedented volume and a radical change in information sharing. Social data streams raise a variety of practical challenges, including the derivation of real-time meaningful insights from effectively gathered social information, as well as a paradigm shift in content distribution that leverages contextual data associated with user preferences, geographical characteristics and devices in general. In this article we present a comprehensive survey that outlines the state of the art and organizes challenges concerning social media streams and the infrastructure of the data centres supporting efficient access to data streams in terms of content distribution, data diffusion, data replication, energy efficiency and network infrastructure. We systematize the existing literature and proceed to identify and analyse the main research points and industrial efforts in the area as far as modelling, simulation and performance evaluation are concerned.
Hybrid Job-driven Scheduling for Virtual MapReduce Clusters
It is cost-efficient for a tenant with a limited budget to establish a virtual MapReduce cluster by renting multiple virtual private servers (VPSs) from a VPS provider. To provide an appropriate scheduling scheme for this type of computing environment, we propose in this paper a hybrid job-driven scheduling scheme (JoSS for short) from a tenant's perspective. JoSS provides not only job-level scheduling, but also map-task-level and reduce-task-level scheduling. JoSS classifies MapReduce jobs based on job scale and job type and designs an appropriate scheduling policy to schedule each class of jobs. The goal is to improve data locality for both map tasks and reduce tasks, avoid job starvation, and improve job execution performance. Two variations of JoSS are further introduced to separately achieve better map-data locality and faster task assignment. We conduct extensive experiments to evaluate and compare the two variations with current scheduling algorithms supported by Hadoop. The results show that the two variations outperform the other tested algorithms in terms of map-data locality, reduce-data locality, and network overhead without incurring significant overhead. In addition, the two variations are separately suitable for different MapReduce-workload scenarios and provide the best job performance among all tested algorithms.
(Comment: 13 pages and 17 figures.)
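The map-data-locality goal can be sketched as a tiered placement policy. The tiers and names below are my assumptions in the spirit of locality-aware Hadoop scheduling, not the paper's actual JoSS algorithm: prefer the node holding a task's input split, fall back to a node in the same rack, then to any free slot.

```python
def assign_map_tasks(tasks, free_slots, rack_of):
    """tasks: {task_id: node holding its input split};
    free_slots: {node: free slot count}; rack_of: {node: rack id}.
    Returns (assignment, node_local_count, rack_local_count)."""
    assignment, node_local, rack_local = {}, 0, 0
    for task, data_node in tasks.items():
        if free_slots.get(data_node, 0) > 0:          # tier 1: node-local
            target = data_node
            node_local += 1
        else:                                         # tier 2: rack-local
            target = next((n for n, c in free_slots.items()
                           if c > 0 and rack_of[n] == rack_of[data_node]),
                          None)
            if target is not None:
                rack_local += 1
            else:                                     # tier 3: any free slot
                target = next((n for n, c in free_slots.items() if c > 0),
                              None)
        if target is None:                            # cluster is full
            break
        free_slots[target] -= 1
        assignment[task] = target
    return assignment, node_local, rack_local

slots = {"n1": 1, "n2": 1, "n3": 2}
racks = {"n1": "r1", "n2": "r1", "n3": "r2"}
tasks = {"t1": "n1", "t2": "n1", "t3": "n3"}
assignment, node_local, rack_local = assign_map_tasks(tasks, slots, racks)
```

In this example t1 and t3 land on the nodes holding their splits, while t2 (whose data node n1 is already full) degrades gracefully to the rack-local node n2, which is the kind of locality/assignment-speed tradeoff the two JoSS variations tune.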
RADIO: managing the performance of large, distributed storage systems
High-performance computing systems continue to grow in size and complexity, and often have to manage many different tasks simultaneously. The input/output subsystem is frequently a bottleneck for overall system performance, and interference between applications can lead to disproportionate performance degradation, unpredictable execution times, and inefficient use of resources. This talk presents our ongoing research on how the execution of large distributed storage systems should be managed and guaranteed. We discuss our general model for performance management, survey our solutions for the CPU (central processing unit), disk, network, storage, and cache server, and discuss our research aimed at applying these solutions to the control and management of distributed systems.
Challenges in real-time virtualization and predictable cloud computing
Cloud computing and virtualization technology have revolutionized general-purpose computing applications in the past decade. The cloud paradigm offers advantages through reduction of operation costs, server consolidation, flexible system configuration and elastic resource provisioning. However, despite the success of cloud computing for general-purpose computing, existing cloud computing and virtualization technology face tremendous challenges in supporting emerging soft real-time applications such as online video streaming, cloud-based gaming, and telecommunication management. These applications demand real-time performance in open, shared and virtualized computing environments. This paper identifies the technical challenges in supporting real-time applications in the cloud, surveys recent advancements in real-time virtualization and cloud computing technology, and offers research directions to enable cloud-based real-time applications in the future.