Multimedia delivery in the future internet
The term “Networked Media” implies that all kinds of media including text, image, 3D graphics, audio
and video are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, Fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications and of technological innovations concerning
media formats, wireless networks, terminal types and capabilities. And there is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so regularly, searching across more than 160 exabytes of content. In the near future these numbers are expected to rise exponentially: Internet content is projected to grow by at least a factor of 6, to more than 990 exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged that in the near to mid term, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens’ quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-
network adaptation, machine-to-machine communication (including RFIDs), rich 3D content as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and be able to support enhanced perceived quality-of-experience (PQoE) and
innovative applications “on the move”, like virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming and edutainment. In this context, the interaction with content
combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P
networks and the dynamic adaptation to the characteristics of diverse mobile terminals are expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects in Framework Programme 6 (FP6) and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily contributed to this white paper, aiming to describe the status, the state of the art, the challenges and the way ahead in the area of content-aware media delivery platforms.
Energy challenges for ICT
The energy consumption from the expanding use of information and communications technology (ICT) is unsustainable with present drivers, and it will weigh heavily on future climate change. However, ICT devices have the potential to contribute significantly to the reduction of CO2 emissions and to enhance resource efficiency in other sectors, e.g., transportation (through intelligent transportation systems, advanced driver assistance systems and self-driving vehicles), heating (through smart building control), and manufacturing (through digital automation based on smart autonomous sensors). To address the energy sustainability of ICT and capture the full potential of ICT in resource efficiency, a multidisciplinary ICT-energy community needs to be brought together, covering devices, microarchitectures, ultra large-scale integration (ULSI), high-performance computing (HPC), energy harvesting, energy storage, system design, embedded systems, efficient electronics, static analysis, and computation. In this chapter, we introduce challenges and opportunities in this emerging field and a common framework to strive towards energy-sustainable ICT.
A Survey on the Contributions of Software-Defined Networking to Traffic Engineering
Since the appearance of OpenFlow back in 2008, software-defined networking (SDN) has gained momentum. Although there are some discrepancies among the standards-developing organizations working on SDN about what SDN is and how it is defined, they all outline traffic engineering (TE) as a key application. One of the most common objectives of TE is congestion minimization, where techniques such as traffic splitting among multiple paths or advanced reservation systems are used. In such a scenario, this manuscript surveys the role of a comprehensive list of SDN protocols in TE solutions, in order to assess how these protocols can benefit TE. The SDN protocols have been categorized using the SDN architecture proposed by the Open Networking Foundation, which differentiates among data-controller plane interfaces, application-controller plane interfaces, and management interfaces, in order to state how the interface type in which they operate influences TE. In addition, the impact of the SDN protocols on TE has been evaluated by comparing them with the path computation element (PCE)-based architecture. The PCE-based architecture has been selected to measure the impact of SDN on TE because it is the most novel TE architecture to date, and because it already defines a set of metrics to measure the performance of TE solutions. We conclude that using the three types of interfaces simultaneously will result in more powerful and enhanced TE solutions, since they benefit TE in complementary ways.
Funded by the European Commission through the Horizon 2020 Research and Innovation Programme (GN4) under Grant 691567, and by the Spanish Ministry of Economy and Competitiveness through the Secure Deployment of Services Over SDN and NFV-based Networks Project (S&NSEC) under Grant TEC2013-47960-C4-3-
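To make the congestion-minimization objective above concrete, here is a minimal Python sketch of traffic splitting among multiple paths, one of the TE techniques the survey mentions. The path names, capacities and demand are illustrative assumptions, not data from the survey, and real TE solutions would solve this as an optimization over the whole network rather than per flow.

```python
# Hedged sketch: split one flow's demand across candidate paths in
# proportion to residual capacity, so each path ends up with the same
# relative load. All numbers below are invented for illustration.

def split_demand(paths: dict[str, float], demand: float) -> dict[str, float]:
    """Return per-path allocations for `demand`, proportional to the
    residual capacity recorded in `paths` (same units as `demand`)."""
    total = sum(paths.values())
    if demand > total:
        raise ValueError("demand exceeds aggregate residual capacity")
    return {name: demand * cap / total for name, cap in paths.items()}

if __name__ == "__main__":
    residual = {"path_a": 400.0, "path_b": 250.0, "path_c": 100.0}  # Mb/s, assumed
    for name, mbps in split_demand(residual, demand=300.0).items():
        print(f"{name}: {mbps:.1f} Mb/s")
```

Splitting proportionally to residual capacity leaves every candidate path at the same relative utilization (here 40%), which is the max-utilization balancing that congestion-oriented TE aims for.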
A cluster-based decentralized job dispatching for the large-scale cloud
The remarkable development of cloud computing in the past few years, and its proven ability to handle web hosting
workloads, is prompting researchers to investigate whether clouds are suitable for running large-scale computations. Cloud load balancing is one of the solutions for providing reliable and scalable cloud services; load balancing for multimedia streaming in particular requires dynamic, real-time strategies. In this context, this paper proposes an Inter Cloud Manager (ICM) job dispatching algorithm for large-scale cloud environments. ICM mainly performs two tasks: clustering (neighboring) and decision-making. For clustering, ICM uses Hello packets to observe and collect data from its neighbor nodes, and decision-making is based on both the measured execution time and
network delay in forwarding the jobs and receiving the result of the execution. We then run experiments on a
large-scale laboratory test-bed to evaluate the performance of ICM, and compare it with well-known decentralized
algorithms such as Ant Colony, Workload and Client Aware Policy (WCAP), and the Honey-Bee Foraging Algorithm
(HFA). Measurements focus in particular on the observed total average response time including network delay in
congested environments. The experimental results show that in most cases, ICM is better at avoiding system saturation under heavy load.
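As a reading aid, the following Python sketch mirrors the ICM decision-making step as the abstract describes it: each job goes to the neighbor minimizing measured execution time plus the network delay of forwarding the job and retrieving the result. The data structure, the linear scaling with job size, and all numbers are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of an ICM-style dispatch decision. Field names and the
# cost model are illustrative assumptions based on the abstract.

from dataclasses import dataclass

@dataclass
class Neighbor:
    name: str
    avg_exec_time: float  # seconds per unit of job size, from past results
    network_delay: float  # one-way delay in seconds, measured via Hello packets

def dispatch(job_size: float, neighbors: list[Neighbor]) -> Neighbor:
    """Pick the neighbor with the lowest estimated completion cost."""
    def cost(n: Neighbor) -> float:
        # Delay counts twice: once to forward the job, once to return the result.
        return job_size * n.avg_exec_time + 2 * n.network_delay
    return min(neighbors, key=cost)

if __name__ == "__main__":
    cluster = [
        Neighbor("node-1", avg_exec_time=0.8, network_delay=0.05),
        Neighbor("node-2", avg_exec_time=0.5, network_delay=0.30),
        Neighbor("node-3", avg_exec_time=1.2, network_delay=0.01),
    ]
    print("dispatch to:", dispatch(2.0, cluster).name)  # -> node-2
```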
Global-Scale Resource Survey and Performance Monitoring of Public OGC Web Map Services
One of the most widely-implemented service standards provided by the Open
Geospatial Consortium (OGC) to the user community is the Web Map Service (WMS).
WMS is widely employed globally, but there is limited knowledge of the global
distribution, adoption status or the service quality of these online WMS
resources. To fill this void, we investigated global WMS resources and
performed distributed performance monitoring of these services. This paper
explicates a distributed monitoring framework that was used to monitor 46,296
WMSs continuously for over one year and a crawling method to discover these
WMSs. We analyzed server locations, provider types, themes, the spatiotemporal
coverage of map layers and the service versions for 41,703 valid WMSs.
Furthermore, we appraised the stability and performance of the basic operations (i.e., GetCapabilities and GetMap) for 1210 selected WMSs. We discuss the major
reasons for request errors and performance issues, as well as the relationship
between service response times and the spatiotemporal distribution of client
monitoring sites. This paper will help service providers, end users and
developers of standards to grasp the status of global WMS resources, as well as
to understand the adoption status of OGC standards. The conclusions drawn in
this paper can benefit geospatial resource discovery, service performance
evaluation and guide service performance improvements.
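For readers unfamiliar with WMS monitoring, the sketch below shows the kind of timed GetCapabilities probe such a framework issues against each service. The endpoint URL is a placeholder, the `requests` library is assumed to be installed, and the response check is a crude test for a WMS 1.3.0 capabilities document rather than full validation.

```python
# Hedged sketch of a single WMS availability/performance probe.
# "https://example.org/wms" is a placeholder, not a real monitored service.

import time
import requests

def probe_getcapabilities(base_url: str, timeout: float = 30.0) -> dict:
    """Time one GetCapabilities request and classify the outcome."""
    params = {"service": "WMS", "request": "GetCapabilities"}
    start = time.monotonic()
    try:
        resp = requests.get(base_url, params=params, timeout=timeout)
        elapsed = time.monotonic() - start
        # Crude check: a WMS 1.3.0 capabilities document names its root
        # element WMS_Capabilities near the start of the response body.
        ok = resp.ok and b"WMS_Capabilities" in resp.content[:4096]
        return {"ok": ok, "status": resp.status_code, "seconds": elapsed}
    except requests.RequestException as exc:
        return {"ok": False, "error": type(exc).__name__,
                "seconds": time.monotonic() - start}

if __name__ == "__main__":
    print(probe_getcapabilities("https://example.org/wms"))
```

Run from several geographically distributed clients, timings from probes like this are what allow the relationship between response time and client location to be studied.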
Detection of Fog Network Data Telemetry Using Data Plane Programming
Fog computing has been introduced to deliver Cloud-based services to Internet of Things (IoT) devices. It is located geographically closer to IoT devices than Cloud networks and aims at offering latency-critical computation and storage to end-user applications. To leverage Fog computing for computational offloading from end users, it is important to optimize resources in the Fog nodes dynamically. Provisioning requires knowledge of the current network state; thus, monitoring mechanisms play a significant role in conducting resource management in the network. To keep track of the state of devices, we use P4, a data-plane programming language, to describe the data-plane abstraction of Fog network devices and collect telemetry without control-plane intervention or significant overhead. In this paper, we propose a software-defined architecture with a programmable data plane for data telemetry detection that can be integrated into Fog network resource management. After implementing data telemetry detection based on In-Band Network Telemetry (INT) within a Mininet simulation, we show the available features, preliminary Fog resource management based on the collected telemetry, and future telemetry-based traffic engineering possibilities.
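As a companion to the INT-based approach described above, here is a collector-side Python sketch that decodes a stack of per-hop metadata. It assumes a deliberately simplified fixed layout of 12 bytes per hop (switch ID plus ingress and egress timestamps, big-endian); the actual INT specification defines a richer, variable-length format negotiated in the data plane.

```python
# Hedged sketch: parse simplified per-hop INT metadata on the collector.
# The 12-byte layout below is an assumption, not the INT spec's format.

import struct

HOP_FORMAT = ">III"                      # switch_id, ingress_ts, egress_ts
HOP_SIZE = struct.calcsize(HOP_FORMAT)   # 12 bytes per hop

def parse_int_stack(payload: bytes) -> list[dict]:
    """Decode per-hop records and derive per-hop latency."""
    hops = []
    usable = len(payload) - len(payload) % HOP_SIZE
    for offset in range(0, usable, HOP_SIZE):
        switch_id, ingress, egress = struct.unpack_from(HOP_FORMAT, payload, offset)
        hops.append({"switch_id": switch_id,
                     "hop_latency": egress - ingress})  # assumed clock ticks
    return hops

if __name__ == "__main__":
    # Two synthetic hops: switch 1 spends 40 ticks, switch 2 spends 15.
    sample = struct.pack(HOP_FORMAT, 1, 1000, 1040) + struct.pack(HOP_FORMAT, 2, 2000, 2015)
    print(parse_int_stack(sample))
```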
Software model refactoring based on performance analysis: better working on software or performance side?
Several approaches have been introduced in the last few years to tackle the
problem of interpreting model-based performance analysis results and
translating them into architectural feedback. Typically the interpretation can
take place by browsing either the software model or the performance model. In
this paper, we compare two approaches that we have recently introduced for this
goal: one based on the detection and solution of performance antipatterns, and
another one based on bidirectional model transformations between software and
performance models. We apply both approaches to the same example in order to
illustrate the differences in the obtained performance results. Thereafter, we
raise the level of abstraction and we discuss the pros and cons of working on
the software side and on the performance side.
In Proceedings FESCA 2013, arXiv:1302.478
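To give a flavor of the antipattern-based approach, the toy Python rule below flags saturated resources in a set of performance results; the utilization data, the threshold and the suggested remedy are illustrative assumptions, far simpler than the structured antipattern rules the approach actually uses.

```python
# Hedged toy example of antipattern-style feedback: flag resources whose
# utilization exceeds a threshold. Real antipattern rules also inspect
# the software model's structure, not just a single metric.

def detect_bottlenecks(utilization: dict[str, float],
                       threshold: float = 0.8) -> list[str]:
    """Return resources whose utilization suggests a bottleneck antipattern."""
    return [res for res, u in utilization.items() if u >= threshold]

if __name__ == "__main__":
    measured = {"cpu_app_server": 0.92, "db_pool": 0.55, "network": 0.31}
    for resource in detect_bottlenecks(measured):
        print(f"candidate antipattern at {resource}: consider redistributing "
              "load or revisiting the software model around this resource")
```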