Multimedia delivery in the future internet
The term “Networked Media” implies that all kinds of media, including text, images, 3D graphics, audio
and video, are produced, distributed, shared, managed and consumed online in a convergent manner over
various networks such as the Internet, fibre, WiFi, WiMAX, GPRS, 3G and so on [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and has changed the way we work and live. End users have been confronted
with a bewildering range of media, services and applications, and with technological innovations concerning
media formats, wireless networks, and terminal types and capabilities; there is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one multimedia file, and over 47 million of them do so
regularly, searching across more than 160 exabytes of content. These numbers are expected to rise sharply
in the near future: Internet content is expected to grow by at least a factor of six, to more than
990 exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in the near to mid term the Internet will provide the means to share and distribute new
multimedia content and services with superior quality and striking flexibility, in a trusted and personalised
way, improving citizens’ quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer
in-network adaptation, machine-to-machine communication (including RFID), rich 3D content, community
networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and to support enhanced perceived quality of experience (PQoE) and
innovative applications “on the move”, such as virtual collaboration environments, personalised services
and media, virtual sport groups, online gaming and edutainment. In this context, interaction with content,
combined with interactive multimedia search across distributed repositories, opportunistic P2P
networks and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to
contribute towards such a vision.
Based on work carried out in a number of EC co-funded projects under Framework Programme 6 (FP6)
and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed to this white paper, which aims to describe the status, the state of the art, the challenges and
the way ahead in the area of content-aware media delivery platforms.
Nomadic fog storage
Mobile services demand ever more processing and storage. However,
mobile devices are known for their constraints in processing, storage and energy.
Early proposals addressed these limitations by having mobile devices access remote
clouds, but such proposals suffer from long latencies and backhaul bandwidth limitations
when retrieving data. To mitigate these issues, edge clouds have been proposed. In this
paradigm, intermediate nodes are placed between the mobile devices and the remote
cloud. These intermediate nodes fulfil the end users’ resource requests, namely for
data and processing capability, and reduce the energy drawn from the mobile devices’
batteries.
At the same time, mobile traffic demand is increasing exponentially and the resources
available on mobile devices are evolving faster than ever. This motivates using mobile
nodes’ spare capacity to meet the requirements imposed by new mobile applications.
In this new scenario, mobile devices become both consumers and providers of
the emerging services. The present work investigates this possibility by designing,
implementing and testing a novel nomadic fog storage system that uses fog and mobile
nodes to support upcoming applications. In addition, a novel resource allocation
algorithm has been developed that considers the available energy on mobile devices and
the network topology, together with a replica management module based on data
popularity. A comprehensive evaluation of the fog proposal shows that it is
responsive, offloads traffic from the backhaul links, and enables a fair energy depletion
among mobile nodes by storing content in neighbour nodes with higher battery autonomy.
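The energy-fair placement idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name and the battery-only criterion are ours; the thesis’s actual algorithm also weighs network topology and data popularity):

```python
def pick_replica_host(neighbors):
    """Choose the neighbour with the most remaining battery to hold a replica.

    `neighbors` maps node id -> remaining battery fraction (0.0-1.0).
    Storing content on high-battery neighbours spreads energy depletion
    fairly across the mobile nodes, as the evaluation above describes.
    """
    if not neighbors:
        raise ValueError("no candidate nodes")
    return max(neighbors, key=neighbors.get)

# Example: node "c" has the most battery left, so it hosts the replica.
hosts = {"a": 0.35, "b": 0.60, "c": 0.82}
print(pick_replica_host(hosts))  # -> c
```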
Monitoring large cloud-based systems
Large-scale cloud-based services are built upon a multitude of hardware and software resources, spread
across one or multiple data centers. Controlling and managing these resources requires integrating several
pieces of software to yield a representative view of the data-center status. Today’s monitoring solutions,
both closed and open source, fail in different ways, including lack of scalability, poor representation
of global state conditions, inability to guarantee persistent service delivery, and the impossibility of
monitoring multi-tenant applications. In this paper, we present a novel monitoring architecture that addresses
the aforementioned issues. It integrates a hierarchical scheme to monitor the resources in a cluster with a
distributed hash table (DHT) to broadcast system-state information among different monitors. This architecture
strives for high scalability, effectiveness and resilience, as well as the ability to monitor
services spanning different clusters or even different data centers of the cloud provider. We evaluate the
scalability of the proposed architecture through a bottleneck analysis backed by experimental results.
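As a rough illustration of the DHT-based dissemination idea, the toy class below hashes keys onto a fixed set of nodes so that any monitor can look up another cluster’s published state. The names and API are illustrative, not the paper’s:

```python
import hashlib

class ToyDHT:
    """Toy stand-in for the DHT used to share state among monitors.

    Each key is hashed onto one of a fixed set of storage nodes; a
    cluster monitor put()s its status summary, and any other monitor
    can get() it by key without a central coordinator.
    """
    def __init__(self, n_nodes=4):
        self.nodes = [{} for _ in range(n_nodes)]

    def _owner(self, key):
        # Deterministically map the key to the node responsible for it.
        h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def put(self, key, value):
        self._owner(key)[key] = value

    def get(self, key):
        return self._owner(key).get(key)

dht = ToyDHT()
dht.put("cluster-7/load", {"cpu": 0.62, "vms": 118})
print(dht.get("cluster-7/load")["cpu"])  # -> 0.62
```

A real deployment would replace the fixed node list with a structured overlay that handles joins, departures and replication; the lookup discipline is the same.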
Smart PIN: performance and cost-oriented context-aware personal information network
The next generation of networks will interconnect heterogeneous individual
networks such as WPANs, WLANs, WMANs and cellular networks, adopting IP as the common infrastructural protocol and providing a virtually always-connected network. Furthermore,
many devices enable easy acquisition and storage of information such as pictures, movies and emails. The resulting information overload and the divergent
characteristics of the content make it difficult for users to handle their data manually. Consequently, there is a need for personalised automatic services that enable data exchange across heterogeneous networks and devices. To support these personalised services, user-centric approaches
to data delivery across the heterogeneous network are also required.
In this context, this thesis proposes Smart PIN, a novel performance- and cost-oriented context-aware Personal Information Network. Smart PIN’s architecture is detailed, including its network, service and management components. Within the service component, two novel schemes for efficient delivery of context and content data are proposed:
the Multimedia Data Replication Scheme (MDRS) and the Quality-oriented Algorithm for Multiple-source Multimedia Delivery (QAMMD).
MDRS supports efficient data accessibility among distributed devices using data replication based on a utility function and a minimum data set. QAMMD employs a buffer-underflow-avoidance scheme for streaming, which achieves high multimedia quality without adapting content to network conditions. Simulation models for MDRS and
QAMMD were built based on various heterogeneous network scenarios. Additionally, multiple-source streaming based on QAMMD was implemented as a prototype and tested in an emulated network environment. Comparative tests show that MDRS and QAMMD perform significantly better than other approaches.
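A hypothetical sketch of a utility-based replication test in the spirit of MDRS. The weights and terms here are invented for illustration; the thesis defines its own utility function and minimum data set:

```python
def should_replicate(access_rate, size_mb, free_mb, w_access=1.0, w_cost=0.5):
    """Decide whether to replicate a data item onto a device.

    Replicate when the access benefit outweighs the storage cost and the
    item fits in the device's remaining space.  `access_rate` is the
    item's normalised access frequency (0.0-1.0).
    """
    if size_mb > free_mb:
        return False  # does not fit at all
    utility = w_access * access_rate - w_cost * (size_mb / free_mb)
    return utility > 0

print(should_replicate(access_rate=0.8, size_mb=50, free_mb=500))   # -> True
print(should_replicate(access_rate=0.02, size_mb=400, free_mb=500))  # -> False
```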
Self-management for large-scale distributed systems
Autonomic computing aims at making computing systems self-managing by using autonomic managers in order to reduce obstacles caused by management complexity. This thesis presents results of research on self-management for large-scale distributed systems. This research was motivated by the increasing complexity of computing systems and their management.
In the first part, we present our platform, called Niche, for programming self-managing component-based distributed applications. In our work on Niche, we have faced and addressed the following four challenges in achieving
self-management in a dynamic environment characterized by volatile resources and high churn: resource discovery, robust and efficient sensing and actuation, management bottleneck, and scale. We present results of our research
on addressing the above challenges. Niche implements the autonomic computing architecture proposed by IBM in a fully decentralized way. Niche supports a network-transparent view of the system architecture, simplifying
the design of distributed self-management, and provides a concise and expressive API for self-management. The implementation of the platform relies on the scalability and robustness of structured overlay networks. We proceed
by presenting a methodology for designing the management part of a distributed self-managing application. We define design steps that include partitioning of management functions and orchestration of multiple autonomic
managers.
In the second part, we discuss robustness of management and data consistency, both necessary in a distributed system. Dealing with the effect of churn on management increases the complexity of the management logic
and thus makes its development time-consuming and error-prone. We propose the abstraction of Robust Management Elements, which are able to heal themselves under continuous churn. Our approach is based on replicating a
management element using finite-state-machine replication with a reconfigurable replica set; our algorithm automates the reconfiguration (migration) of the replica set in order to tolerate continuous churn. For data consistency, we propose a majority-based distributed key-value store, built on a peer-to-peer network, that supports multiple consistency levels. The store enables a trade-off between high availability and data consistency. Using majorities avoids the potential drawbacks of master-based consistency control, namely a single point of failure and a potential performance bottleneck.
In the third part, we investigate self-management for cloud-based storage systems, focusing on elasticity control using elements of control theory and machine learning. We have investigated a number of designs for an elasticity controller, including a state-space feedback controller and a controller that combines feedback and feedforward control. We describe our experience in designing an elasticity controller for a cloud-based key-value store using a state-space model that enables trading off performance for cost, and we describe the steps in designing such a controller. We conclude by
presenting the design and evaluation of ElastMan, an elasticity controller for cloud-based elastic key-value stores that combines feedforward and feedback control.
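The majority-based consistency idea can be illustrated with a toy quorum store: writes and reads each contact a majority of replicas, so every read quorum overlaps every write quorum and observes the latest version. This is a simplified sketch, not the thesis’s implementation:

```python
class MajorityStore:
    """Toy majority-quorum key-value store.

    With n replicas, any two majorities (each of size n//2 + 1) must
    share at least one replica, so a read that contacts a majority is
    guaranteed to see the highest version any completed write produced.
    """
    def __init__(self, n_replicas=5):
        self.replicas = [dict() for _ in range(n_replicas)]
        self.majority = n_replicas // 2 + 1

    def write(self, key, value, version):
        # A write completes once a majority of replicas acknowledge it;
        # here we deterministically use the first `majority` replicas.
        for r in self.replicas[:self.majority]:
            r[key] = (version, value)

    def read(self, key):
        # Read from a *different* majority (the last `majority` replicas):
        # it still overlaps the write quorum, so the latest version appears.
        seen = [r[key] for r in self.replicas[-self.majority:] if key in r]
        return max(seen)[1] if seen else None

s = MajorityStore()
s.write("x", "a", version=1)
s.write("x", "b", version=2)
print(s.read("x"))  # -> b
```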
An enhanced dynamic replica creation and eviction mechanism in data grid federation environment
A Data Grid Federation is an infrastructure that connects several grid systems, facilitating the sharing of large amounts of data as well as storage and computing resources. Existing data replication mechanisms decide which file to replicate based on the number of file accesses and place new replicas at locations that provide minimum read cost. This thesis presents an enhanced data replication strategy, the Dynamic Replica Creation and Eviction Mechanism (DRCEM), that makes better use of data grid resources by allocating appropriate replica sites around the federation. DRCEM instead finds file values based on logical dependencies when deciding which file to replicate, and allocates new replicas at locations that provide minimum replica placement cost. The proposed mechanism uses three schemes: 1) a Dynamic Replica Evaluation and Creation Scheme, 2) a Replica Placement Scheme, and 3) a Dynamic Replica Eviction Scheme. DRCEM was evaluated using the OptorSim network simulator on four performance metrics: 1) job completion times, 2) effective network usage, 3) storage element usage, and 4) computing element usage. DRCEM outperforms the ELALW and DRCM mechanisms by 30% and 26% respectively in terms of job completion times, and consumes 42% and 40% less storage than ELALW and DRCM. However, DRCEM shows lower performance than existing mechanisms on computing element usage, due to the additional computation of files’ logical dependencies. Overall, the results revealed better job completion times with lower resource consumption than existing approaches. This research produces three replication schemes embodied in one mechanism that enhances the performance of a Data Grid Federation environment. It improves on the existing mechanism by being able to decide to create or evict more than one file at a particular time.
Furthermore, files’ logical dependencies were integrated into the replica creation scheme to evaluate data files more accurately.
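A hypothetical sketch of dependency-aware file valuation in the spirit of DRCEM. The weighting is invented for illustration; the actual mechanism defines its own evaluation scheme:

```python
def file_value(access_count, dependents):
    """Score a file's replication value from its own accesses plus a
    discounted contribution from files that logically depend on it.

    `dependents` maps each dependent file name -> its access count.
    A file whose derived outputs are read often is worth replicating
    even if it is rarely read directly.
    """
    return access_count + 0.5 * sum(dependents.values())

# "calib.dat" is read 10 times itself, but two result files derived
# from it are read 30 and 20 times, raising its replication value.
print(file_value(10, {"run1.out": 30, "run2.out": 20}))  # -> 35.0
```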