Multimedia delivery in the future internet
The term “Networked Media” implies that all kinds of media including text, image, 3D graphics, audio
and video are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, Fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications and of technological innovations concerning
media formats, wireless networks, terminal types and capabilities. And there is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file and over 47 million of them do so
regularly, searching in more than 160 Exabytes of content. In the near future these numbers are expected
to rise exponentially. Internet content is expected to grow by a factor of at least six, rising
to more than 990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in a near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens’ quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer
in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and be able to support enhanced perceived quality-of-experience (PQoE) and
innovative applications “on the move”, such as virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming, and edutainment. In this context, the interaction with content
combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P
networks and the dynamic adaptation to the characteristics of diverse mobile terminals are expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects, in Framework Program 6 (FP6)
and Framework Program 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed to this white paper, aiming to describe the status, the state of the art, the challenges and the way
ahead in the area of content-aware media delivery platforms.
DeMMon: Decentralized Management and Monitoring Framework
The centralized model proposed by the Cloud computing paradigm is a poor match for the decentralized
nature of mobile and IoT applications, since most of the data
production and consumption is performed by end-user devices outside of the Data Center
(DC). As the number of these devices grows, and given the need to transport data to and
from DCs for computation, application providers incur additional infrastructure costs,
and end-users experience delays when performing operations.
These limitations have led us into a post-cloud era in which a new computing paradigm
has arisen: Edge Computing. Edge Computing takes into account the broad spectrum of
devices residing outside of the DC, closer to the clients, as potential targets for computations,
potentially reducing infrastructure costs, improving the quality of service (QoS)
for end-users and allowing new interaction paradigms between users and applications.
Managing and monitoring the execution of these devices raises new challenges previously
unaddressed by Cloud computing, given the scale of these systems and the devices'
(potentially) unreliable data connections and heterogeneous computational power. Our
study of the state of the art reveals that existing resource monitoring and management
solutions require manual configuration and have centralized components, which
we believe do not scale to larger systems.
In this work, we address these limitations by presenting a novel Decentralized Management
and Monitoring (“DeMMon”) system targeted at edge settings. DeMMon provides
primitives that ease the development of tools for managing the computational resources
that support edge-enabled applications, decomposed into components, through decentralized
actions, taking advantage of partial knowledge of the system. We evaluated our solution
to assess its benefits for information dissemination and monitoring
across a set of realistic emulated scenarios of up to 750 nodes with variable
failure rates. The results show the validity of our approach and that it can outperform
state-of-the-art solutions in scalability and reliability.
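To make the monitoring primitive concrete, the sketch below illustrates gossip-based metric dissemination over partial views, the general technique the abstract describes. It is a minimal Python sketch under our own assumptions; the class, method and field names are illustrative, not DeMMon's actual interface.

import random

class Node:
    """One edge node holding only a small partial view of the system."""
    def __init__(self, node_id, fanout=3):
        self.node_id = node_id
        self.fanout = fanout      # peers contacted per gossip round
        self.partial_view = []    # small sample of other nodes, not full membership
        self.metrics = {}         # node_id -> (version, value), e.g. CPU load

    def update_local_metric(self, value):
        # Bump a per-node version so newer readings win during merges.
        version, _ = self.metrics.get(self.node_id, (0, None))
        self.metrics[self.node_id] = (version + 1, value)

    def gossip_round(self):
        # Push known metrics to a few random peers; relying on partial
        # knowledge keeps per-node state and traffic bounded at scale.
        sample = random.sample(self.partial_view,
                               min(self.fanout, len(self.partial_view)))
        for peer in sample:
            peer.receive(self.metrics)

    def receive(self, remote_metrics):
        # Merge with last-writer-wins on the version counter.
        for node_id, (version, value) in remote_metrics.items():
            local_version, _ = self.metrics.get(node_id, (-1, None))
            if version > local_version:
                self.metrics[node_id] = (version, value)

nodes = [Node(i) for i in range(8)]
for n in nodes:
    n.partial_view = [p for p in nodes if p is not n][:4]  # static views, for brevity
nodes[0].update_local_metric(0.73)                         # e.g. a CPU-load reading
for _ in range(3):
    for n in nodes:
        n.gossip_round()

Repeated rounds spread each reading epidemically, which is what lets this style of monitoring tolerate failures and unreliable links without a central collector.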
An Emergent Architecture for Scaling Decentralized Communication Systems (DCS)
With recent technological advancements accelerating the mobile and wireless Internet solution space, a ubiquitous computing Internet is well within the research and industrial community's design reach: a decentralized system design that is not driven solely by static physical models and sound engineering principles, but evolves dynamically, perhaps sub-optimally at initial deployment, and is socially influenced in its evolution. To complement today's Internet, this thesis proposes a Decentralized Communication System (DCS) architecture with the following characteristics: flat physical topologies with numerous compute-oriented and communication-intensive nodes, many of them operating in multiple functional roles; self-organizing virtual structures formed through alternative mobility scenarios and capable of serving ad hoc networking formations; and emergent operations and control with limited dependency on centralized control and management administration.

Today, decentralized systems are not commercially scalable or viable for broad adoption in the way we have come to rely on the Internet or telephony systems. The premise of this thesis is that DCS can reach the levels of resilience, usefulness, and scale that the industry has come to experience with traditional centralized systems by exploiting the following properties: (i) network density and topological diversity; (ii) self-organization and emergent attributes; (iii) cooperative and dynamic infrastructure; and (iv) node role diversity.

This thesis delivers key contributions towards advancing the current state of the art in decentralized systems. First, we present the vision and a conceptual framework for DCS. Second, the thesis demonstrates that such a framework and concept architecture are feasible by prototyping a DCS platform that exhibits the above properties or, minimally, demonstrates that these properties are feasible through prototyped network services. Third, this work expands on an alternative approach to network clustering using hierarchical virtual clusters (HVC) to facilitate self-organizing network structures.

With increasing network complexity, decentralized systems can suffer unreliable and irregular service quality, especially given unpredictable node mobility and traffic dynamics. The HVC framework is an architectural strategy to address the organizational disorder associated with traditional decentralized systems. The proposed HVC architecture, along with the associated promotional methodology, organizes distributed control and management services by leveraging alternative organizational models (e.g., peer-to-peer (P2P), centralized, or tiered) in a hierarchical and virtual fashion. Through simulation and analytical modeling, we demonstrate HVC efficiencies in DCS structural scalability and resilience by comparing static and dynamic HVC node configurations against traditional physical configurations based on P2P, centralized, or tiered structures. Next, an emergent management architecture for DCS, exploiting HVC for self-organization, introduces emergence as an operational approach to scaling DCS services for state management and policy control. In this thesis, emergence scales in hierarchical fashion, using virtual clustering to create multiple tiers of local and global separation for aggregation, distribution, and network control. Emergence is an architectural objective, which HVC introduces into the proposed self-management design for scaling and stability purposes.
Since HVC expands the clustering model hierarchically and virtually, a clusterhead (CH) node, positioned as a proxy for a specific cluster of grouped DCS nodes, can also operate in a micro-capacity as a peer member of an organized cluster in a higher tier. As the HVC promotional process continues through the hierarchy, each tier exhibits emergent behavior. With HVC as the self-organizing structural framework, a multi-tiered, emergent architecture enables the decentralized management strategy to improve the scaling objectives that traditionally challenge decentralized systems. The HVC organizational concept and its emergence properties align with the view of the human brain's neocortex as a layered structure for sensory storage, prediction and intelligence.

It is the position of this thesis that, for DCS to scale and maintain broad stability, network control and management must strive towards an emergent, natural approach. Today's models for network control and management, based on purely centralized designs, have proven to lack scalability and responsiveness, and it is unlikely that singular organizational models can withstand the operational complexities associated with DCS. In this work, we integrate emergence and learning-based methods in a cooperative computing manner towards realizing DCS self-management. Unlike much existing work in these areas, which breaks down with increased network complexity and dynamics, the proposed HVC framework offsets these issues through effective separation, aggregation and asynchronous processing of both distributed state and policy.

Using modeling techniques, we demonstrate that such an architecture is feasible and can improve the operational robustness of DCS. The modeling emphasis is on demonstrating the operational advantages of an HVC-based organizational strategy for emergent management services (i.e., reachability, availability or performance). By integrating the two approaches, the DCS architecture forms a scalable system that addresses the challenges associated with traditional decentralized systems. The hypothesis is that the emergent management system architecture will improve the operational scaling properties of DCS-based applications and services. Additionally, we demonstrate the structural flexibility of HVC as an underlying service infrastructure on which to build and deploy DCS applications and layered services. The modeling results demonstrate that an HVC-based emergent management and control system operationally outperforms traditional structural organizational models. In summary, this thesis brings together the above contributions towards delivering a scalable, decentralized system for Internet mobile computing and communications.
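To illustrate the promotional process, here is a minimal Python sketch of hierarchical virtual clustering: each tier is partitioned into virtual clusters, a clusterhead (CH) is elected per cluster, and the CHs become peers in the next tier. The fixed cluster size and capacity-based election are illustrative assumptions, not the thesis's exact methodology.

def form_tiers(nodes, cluster_size=4):
    """Repeatedly cluster the current tier and promote its clusterheads."""
    tiers = [nodes]
    current = nodes
    while len(current) > 1:
        next_tier = []
        # Partition the current tier into virtual clusters of bounded size.
        for i in range(0, len(current), cluster_size):
            cluster = current[i:i + cluster_size]
            ch = max(cluster, key=lambda n: n["capacity"])  # elect best-provisioned node
            for member in cluster:
                member["ch"] = ch["id"]   # members route control traffic via their CH
            next_tier.append(ch)          # the CH also acts as a peer one tier up
        tiers.append(next_tier)
        current = next_tier
    return tiers

# Example: 16 nodes collapse into tiers of 16 -> 4 -> 1 clusterheads.
nodes = [{"id": i, "capacity": (i * 7) % 13} for i in range(16)]
print([len(tier) for tier in form_tiers(nodes)])   # [16, 4, 1]

Each CH thus appears both as a proxy for its own cluster and as an ordinary peer one tier up, the dual role the promotional process exploits.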
A Survey on the Contributions of Software-Defined Networking to Traffic Engineering
Since the appearance of OpenFlow back in 2008, software-defined networking (SDN) has gained momentum. Although there are some discrepancies among the standards-developing organizations working with SDN about what SDN is and how it is defined, they all outline traffic engineering (TE) as a key application. One of the most common objectives of TE is congestion minimization, where techniques such as traffic splitting among multiple paths or advanced reservation systems are used. In such a scenario, this manuscript surveys the role of a comprehensive list of SDN protocols in TE solutions, in order to assess how these protocols can benefit TE. The SDN protocols have been categorized using the SDN architecture proposed by the Open Networking Foundation, which differentiates among data-controller plane interfaces, application-controller plane interfaces, and management interfaces, in order to state how the interface type in which they operate influences TE. In addition, the impact of the SDN protocols on TE has been evaluated by comparing them with the path computation element (PCE)-based architecture. The PCE-based architecture has been selected to measure the impact of SDN on TE because it is the most novel TE architecture to date, and because it already defines a set of metrics to measure the performance of TE solutions. We conclude that using the three types of interfaces simultaneously will result in more powerful and enhanced TE solutions, since they benefit TE in complementary ways.
European Commission through the Horizon 2020 Research and Innovation Programme (GN4) under Grant 691567;
Spanish Ministry of Economy and Competitiveness under the Secure Deployment of Services Over SDN and NFV-based Networks Project S&NSEC under Grant TEC2013-47960-C4-3-
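As a small worked example of the congestion-minimization objective mentioned in the abstract above, the Python sketch below splits one demand across multiple candidate paths so that the maximum path utilization stays low. The greedy increment scheme and the capacity figures are illustrative assumptions; real TE solutions typically solve this with linear programming over whole topologies.

def split_demand(demand, path_capacities, increment=1.0):
    """Assign traffic in small increments to the path whose utilization
    would stay lowest, yielding per-path split ratios."""
    loads = [0.0] * len(path_capacities)
    remaining = demand
    while remaining > 0:
        step = min(increment, remaining)
        # Pick the path with the lowest utilization after adding this step.
        best = min(range(len(loads)),
                   key=lambda i: (loads[i] + step) / path_capacities[i])
        loads[best] += step
        remaining -= step
    return [load / demand for load in loads]

ratios = split_demand(demand=30.0, path_capacities=[10.0, 20.0, 40.0])
print(ratios)  # about [0.13, 0.3, 0.57], roughly proportional to capacity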
Segment Routing: a Comprehensive Survey of Research Activities, Standardization Efforts and Implementation Results
Fixed and mobile telecom operators, enterprise network operators and cloud
providers strive to face the challenging demands coming from the evolution of
IP networks (e.g. huge bandwidth requirements, integration of billions of
devices and millions of services in the cloud). Proposed in the early 2010s, the
Segment Routing (SR) architecture helps to face these challenging demands, and it
is currently being adopted and deployed. The SR architecture is based on the
concept of source routing and has interesting scalability properties, as it
dramatically reduces the amount of state information to be configured in the
core nodes to support complex services. The SR architecture was first implemented
with the MPLS dataplane and then, quite recently, with the IPv6 dataplane
(SRv6). SRv6 has been extended from the simple steering
of packets across nodes to a general network programming approach, making it
very suitable for use cases such as Service Function Chaining and Network
Function Virtualization. In this paper we present a tutorial and a
comprehensive survey on SR technology, analyzing standardization efforts,
patents, research activities and implementation results. We start with an
introduction on the motivations for Segment Routing and an overview of its
evolution and standardization. Then, we provide a tutorial on Segment Routing
technology, with a focus on the novel SRv6 solution. We discuss the
standardization efforts and the patents providing details on the most important
documents and mentioning other ongoing activities. We then thoroughly analyze
research activities according to a taxonomy. We have identified 8 main
categories during our analysis of the current state of play: Monitoring,
Traffic Engineering, Failure Recovery, Centrally Controlled Architectures, Path
Encoding, Network Programming, Performance Evaluation and Miscellaneous.
Comment: Submitted to IEEE Communications Surveys & Tutorials.
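To make the source-routing idea described above concrete, the following Python sketch models the core SR mechanism: the ingress node encodes the whole path as a segment list carried in the packet, and each segment endpoint merely advances a pointer, so core nodes keep no per-flow state. The model is deliberately simplified (real SRv6 places the active segment in the IPv6 destination address and encodes the list in reverse order per RFC 8754), and the SID values are made up.

class SRv6Packet:
    def __init__(self, segments, payload):
        self.segments = segments            # forward-ordered here, for readability
        self.segments_left = len(segments)  # hops still to visit
        self.payload = payload

    @property
    def active_segment(self):
        return self.segments[len(self.segments) - self.segments_left]

def process_at_endpoint(packet):
    """Endpoint behaviour at the node owning the active segment:
    advance the pointer to steer toward the next segment."""
    if packet.segments_left > 0:
        packet.segments_left -= 1

pkt = SRv6Packet(["fc00:1::1", "fc00:2::100", "fc00:3::1"], payload=b"data")
while pkt.segments_left > 0:
    print("forwarding toward", pkt.active_segment)
    process_at_endpoint(pkt)

Since the full path rides in the packet, only the ingress needs per-service state, which is the scalability property the abstract highlights.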
Cost-Aware Resource Management for Decentralized Internet Services
Decentralized network services, such as naming systems, content
distribution networks, and publish-subscribe systems, play an
increasingly critical role and are required to provide high-performance,
low-latency service, achieve high availability in the
presence of network and node failures, and handle a large volume
of users. Judicious utilization of expensive system resources,
such as memory space, network bandwidth, and number of machines,
is fundamental to achieving the above properties. Yet, current
network services typically rely on less-informed, heuristic-based
techniques to manage scarce resources, and often fall short of
expectations.
This thesis presents a principled approach for building high
performance, robust, and scalable network services. The key
contribution of this thesis is to show that resolving the
fundamental cost-benefit tradeoff between resource consumption and
performance through mathematical optimization is practical in
large-scale distributed systems, and enables decentralized network
services to efficiently meet system-wide performance goals. This
thesis presents a practical approach for resource management in
three stages: analytically model the cost-benefit tradeoff as a
constrained optimization problem, determine a near-optimal
resource allocation strategy on the fly, and enforce the derived
strategy through lightweight, decentralized mechanisms. It
builds on self-organizing structured overlays, which provide
failure resilience and scalability, and complements them with
stronger performance guarantees and robustness under sudden
changes in workload. This work enables applications to meet
system-wide performance targets, such as low average response
times, high cache hit rates, and small update dissemination times
with low resource consumption. Alternatively, applications can
make the maximum use of available resources, such as storage and
bandwidth, and derive large gains in performance.
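As a small illustration of the first stage, the cost-benefit tradeoff, consider choosing which objects to cache under a fixed memory budget. The Python sketch below uses a greedy benefit-per-cost heuristic as a stand-in for the thesis's constrained-optimization formulation; the object names, sizes and benefit values are invented.

def allocate(objects, budget):
    """objects: list of (name, size, expected_benefit); budget: total memory.
    Returns the chosen set, greedily maximizing benefit density."""
    chosen, used = [], 0.0
    for name, size, benefit in sorted(objects,
                                      key=lambda o: o[2] / o[1],
                                      reverse=True):
        if used + size <= budget:
            chosen.append(name)
            used += size
    return chosen

# Example: popular small objects win under a tight budget.
catalog = [("a", 4.0, 40.0), ("b", 2.0, 30.0), ("c", 6.0, 18.0)]
print(allocate(catalog, budget=6.0))  # ['b', 'a']

The thesis's approach solves this tradeoff as a constrained optimization rather than greedily, but the shape of the problem, benefit bought per unit of scarce resource, is the same.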
I have implemented an extensible framework called Honeycomb to
perform cost-aware resource management on structured overlays
based on the above approach and built three critical network
services using it. These services consist of a new name system for
the Internet called CoDoNS that distributes data associated with
domain names, an open-access content distribution network called
CobWeb that caches web content for faster access by users, and an
online information monitoring system called Corona that notifies
users about changes to web pages. Simulations and performance
measurements from a planetary-scale deployment show that these
services provide unprecedented performance improvement over the
current state of the art.
Privacy-Friendly Collaboration for Cyber Threat Mitigation
Sharing of security data across organizational boundaries has often been
advocated as a promising way to enhance cyber threat mitigation. However,
collaborative security faces a number of important challenges, including
privacy, trust, and liability concerns with the potential disclosure of
sensitive data. In this paper, we focus on data sharing for predictive
blacklisting, i.e., forecasting attack sources based on past attack
information. We propose a novel privacy-enhanced data sharing approach in which
organizations estimate collaboration benefits without disclosing their
datasets, organize into coalitions of allied organizations, and securely share
data within these coalitions. We study how different partner selection
strategies affect prediction accuracy by experimenting on a real-world dataset
of 2 billion IP addresses and observe up to a 105% prediction improvement.
Comment: This paper has been withdrawn, as it has been superseded by
arXiv:1502.0533
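As an illustration of the first step, estimating collaboration benefit without disclosing datasets, the Python sketch below lets two organizations compare MinHash signatures of their attacker-IP sets to approximate their overlap. MinHash is our stand-in for exposition only; the paper's actual protocol relies on privacy-enhancing techniques with stronger guarantees than one-way hashing.

import hashlib

def minhash_signature(ip_set, num_hashes=128):
    # One minimum per seeded hash function; the signature, not the set, is shared.
    sig = []
    for seed in range(num_hashes):
        sig.append(min(int(hashlib.sha256(f"{seed}:{ip}".encode()).hexdigest(), 16)
                       for ip in ip_set))
    return sig

def estimated_jaccard(sig_a, sig_b):
    # Fraction of matching minima estimates the Jaccard similarity.
    matches = sum(a == b for a, b in zip(sig_a, sig_b))
    return matches / len(sig_a)

org_a = {"203.0.113.5", "198.51.100.7", "192.0.2.9"}
org_b = {"203.0.113.5", "198.51.100.7", "192.0.2.44"}
sim = estimated_jaccard(minhash_signature(org_a), minhash_signature(org_b))
print(f"estimated overlap: {sim:.2f}")   # high overlap suggests a promising partner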