Quality of service assurance for the next generation Internet
The provisioning of quality of service for multimedia applications has attracted increasing interest among researchers and Internet Service Providers. With the migration from resource-based to service-driven networks, it has become evident that the Internet model should be enhanced to support a variety of differentiated services that match application and customer requirements, rather than remaining limited to the flat best-effort service that is currently provided.
In this paper, we describe and critically appraise the major achievements of the efforts to introduce Quality of Service (QoS) assurance and provisioning within the Internet model. We then propose a research path for the creation of a network services management architecture,
through which we can move towards a QoS-enabled network environment, offering support for a variety of different services based on traffic characteristics and user expectations.
Lessons learned from the design of a mobile multimedia system in the Moby Dick project
Recent advances in wireless networking technology and the exponential development of semiconductor technology have engendered a new paradigm of computing, called personal mobile computing or ubiquitous computing. This offers a vision of the future with a much richer and more exciting set of architecture research challenges than extrapolations of the current desktop architectures. In particular, these devices will have limited battery resources, will handle diverse data types, and will operate in environments that are insecure, dynamic and which vary significantly in time and location. The research performed in the MOBY DICK project is about designing such a mobile multimedia system. This paper discusses the approach taken in the MOBY DICK project to solve some of these problems, describes its contributions, and assesses what was learned from the project.
End-Point Resource Admission Control for Remote Control Multimedia Applications
One goal in certain classes of networked multimedia applications, such as full-feedback remote control, is to provide end-to-end guarantees. To achieve guarantees, all resources along the path(s) between the source(s) and sink(s) must be controlled. Resource availability is checked by the admission service during the call establishment phase. Current admission services control only network resources such as bandwidth and network delay. To provide end-to-end guarantees, networked applications also need operating system resources and I/O devices at the endpoints. All such resources must be included in a robust admission process. By integrating the end-point resources, we observed several dependencies which force changes in admission algorithms designed and implemented for control of a single resource. We have designed and implemented a multi-level admission service within our Omega architecture which controls the availability of end-point resources needed in remote control multimedia applications such as telerobotics.
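The all-or-nothing admission test the abstract describes can be illustrated with a minimal sketch. The resource names, capacities and the two-pass check-then-reserve structure below are illustrative assumptions, not the Omega architecture's actual interfaces.

```python
# Hypothetical sketch of a multi-level admission test: a call is admitted
# only if every required resource (network and end-point) can cover it.
# Resource names and capacities are illustrative assumptions.

def admit_call(request, resources):
    """Admit a call only if all requested resources are available.

    `request` maps resource names to required amounts; `resources` maps
    the same names to remaining capacity. Admission is all-or-nothing:
    if any single resource is short, the whole call is rejected.
    """
    # First pass: test every resource without committing anything.
    for name, needed in request.items():
        if resources.get(name, 0) < needed:
            return False  # one missing resource fails the admission
    # Second pass: reserve only after all checks succeeded.
    for name, needed in request.items():
        resources[name] -= needed
    return True

# End-point and network resources pooled into one admission decision.
capacity = {"bandwidth_kbps": 10_000, "cpu_pct": 100, "io_slots": 4}
assert admit_call({"bandwidth_kbps": 6_000, "cpu_pct": 40, "io_slots": 1}, capacity)
# A second identical call fails: bandwidth is now insufficient, so
# nothing is reserved, mirroring the dependency the abstract notes.
assert not admit_call({"bandwidth_kbps": 6_000, "cpu_pct": 40, "io_slots": 1}, capacity)
```

The two-pass structure matters: checking all resources before reserving any avoids partial reservations that would have to be rolled back when a later resource check fails.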
Resource Management in Multimedia Networked Systems
Error-free multimedia data processing and communication includes providing guaranteed services such as the colloquial telephone. A set of problems has to be solved and handled in the control-management level of the host and underlying network architectures. We discuss in this paper 'resource management' at the host and network level, and their cooperation to achieve global guaranteed transmission and presentation services, which means end-to-end guarantees. The emphasis is on 'network resources' (e.g., bandwidth, buffer space) and 'host resources' (e.g., CPU processing time) which need to be controlled in order to satisfy the Quality of Service (QoS) requirements set by the users of the multimedia networked system. The control of the specified resources involves three actions: (1) properly allocate resources (end-to-end) during the multimedia call establishment, so that traffic can flow according to the QoS specification; (2) control resource allocation during the multimedia transmission; (3) adapt to changes when degradation of system components occurs. These actions imply the necessity of: (a) new services, such as admission services, at the hosts and intermediate network nodes; (b) new protocols for establishing connections which satisfy QoS requirements along the path from sender to receiver(s), such as a resource reservation protocol; (c) new control algorithms for delay, rate and error control; (d) new resource monitoring protocols for reporting system changes, such as a resource administration protocol; (e) new adaptive schemes for dynamic resource allocation to respond to system changes; and (f) new architectures at the hosts and switches to accommodate the resource management entities. This article gives an overview of services, mechanisms and protocols for resource management as outlined above.
Multimedia delivery in the future internet
The term “Networked Media” implies that all kinds of media including text, image, 3D graphics, audio
and video are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, Fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications and of technological innovations concerning
media formats, wireless networks, terminal types and capabilities. And there is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file and over 47 million of them do so
regularly, searching in more than 160 Exabytes of content. In the near future these numbers are expected
to rise exponentially. It is expected that Internet content will increase by at least a factor of six, rising
to more than 990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in a near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens’ quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network
adaptation, machine-to-machine communication (including RFIDs), rich 3D content as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and be able to support enhanced perceived quality-of-experience (PQoE) and
innovative applications “on the move”, like virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming, edutainment. In this context, the interaction with content
combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P
networks and the dynamic adaptation to the characteristics of diverse mobile terminals are expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects, in Framework Program 6 (FP6)
and Framework Program 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed in this white paper aiming to describe the status, the state-of-the art, the challenges and the way
ahead in the area of Content-Aware media delivery platforms.
End-to-end QoE optimization through overlay network deployment
In this paper an overlay network for end-to-end QoE management is presented. The goal of this infrastructure is QoE optimization by routing around failures in the IP network and optimizing the bandwidth usage on the last mile to the client. The overlay network consists of components that are located both in the core and at the edge of the network. A number of overlay servers perform end-to-end QoS monitoring and maintain an overlay topology, allowing them to route around link failures and congestion. Overlay access components situated at the edge of the network are responsible for determining whether packets are sent to the overlay network, while proxy components manage the bandwidth on the last mile. This paper gives a detailed overview of the end-to-end architecture together with representative experimental results which comprehensively demonstrate the overlay network's ability to optimize the QoE.
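The rerouting idea above — overlay servers monitoring end-to-end delay and relaying traffic around a degraded direct path — can be sketched as shortest-path selection over the monitored overlay topology. The topology, delay values and function name below are illustrative assumptions, not the paper's actual components.

```python
# Minimal sketch of overlay rerouting: when the direct IP path between two
# overlay servers degrades, traffic is relayed through a healthier
# intermediate server. Topology and link delays are illustrative.
import heapq

def overlay_route(graph, src, dst):
    """Shortest path by monitored end-to-end delay (Dijkstra)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, delay in graph.get(node, {}).items():
            nd = d + delay
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Walk the predecessor chain back from the destination.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Monitored one-way delays in ms; the direct A-C link is congested
# (120 ms), so the overlay relays via server B (20 + 25 = 45 ms).
delays = {"A": {"B": 20, "C": 120}, "B": {"C": 25}, "C": {}}
assert overlay_route(delays, "A", "C") == ["A", "B", "C"]
```

In a deployment, the delay values would be refreshed continuously by the overlay servers' QoS monitoring, so routes adapt as congestion appears and clears.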
Congestion Control for Network-Aware Telehaptic Communication
Telehaptic applications involve delay-sensitive multimedia communication
between remote locations with distinct Quality of Service (QoS) requirements
for different media components. These QoS constraints pose a variety of
challenges, especially when the communication occurs over a shared network,
with unknown and time-varying cross-traffic. In this work, we propose a
transport layer congestion control protocol for telehaptic applications
operating over shared networks, termed the dynamic packetization module (DPM).
DPM is a lossless, network-aware protocol which tunes the telehaptic
packetization rate based on the level of congestion in the network. To monitor
the network congestion, we devise a novel network feedback module, which
communicates the end-to-end delays encountered by the telehaptic packets to the
respective transmitters with negligible overhead. Via extensive simulations, we
show that DPM meets the QoS requirements of telehaptic applications over a wide
range of network cross-traffic conditions. We also report qualitative results
of a real-time telepottery experiment with several human subjects, which reveal
that DPM preserves the quality of telehaptic activity even under heavily
congested network scenarios. Finally, we compare the performance of DPM with
several previously proposed telehaptic communication protocols and demonstrate
that DPM outperforms these protocols.
Comment: 25 pages, 19 figures
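The core control loop described above — tuning the telehaptic packetization rate from fed-back end-to-end delay — can be sketched as a simple threshold controller. The deadline, step sizes and rate bounds below are assumptions for illustration, not DPM's actual parameters.

```python
# Illustrative sketch of delay-driven rate adaptation in the spirit of DPM:
# the transmitter lowers its packetization rate when the fed-back delay
# exceeds the haptic deadline, and probes upwards again as delay recedes.
# All thresholds and bounds are assumed values, not the protocol's.

def adapt_rate(rate_hz, delay_ms, deadline_ms=30.0,
               min_hz=100, max_hz=1000, step_hz=100):
    """Return the next packetization rate given the fed-back delay."""
    if delay_ms > deadline_ms:
        # Congestion signalled: back off to reduce offered load.
        return max(min_hz, rate_hz - step_hz)
    if delay_ms < 0.8 * deadline_ms:
        # Headroom available: probe upwards toward the nominal 1 kHz.
        return min(max_hz, rate_hz + step_hz)
    return rate_hz  # within the comfort band: hold steady

rate = 1000
for delay in [35, 40, 32, 20, 15]:  # fed-back one-way delays in ms
    rate = adapt_rate(rate, delay)
assert rate == 900  # backed off under congestion, partially recovered
```

The protocol is "lossless" in the sense that adaptation changes how samples are packetized rather than dropping them; this sketch only captures the rate decision, not the packing of multiple samples per packet.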
Network Service Customization: End-Point Perspective (Proposal)
An important problem with cell-switched technologies such as Asynchronous Transfer Mode (ATM) is the provision of customized multiplexing behavior to applications. This customization takes the form of setting up processes in the network and end-points to meet application Quality of Service (QoS) requirements.
The proposed thesis work examines the necessary components of a software architecture to provide QoS in the end-points of a cell-switched network. An architecture has been developed, and the thesis work will refine it using a driving application of the full-feedback teleoperation of a robotics system.
Preliminary experimental results indicate that such teleoperation is possible using general-purpose workstations and a lightly-loaded ATM link. An important result of the experimental portion of the thesis work will be a study of the domain of applicability for various resource management techniques.
Analysis domain model for shared virtual environments
The field of shared virtual environments, which also
encompasses online games and social 3D environments, has a
system landscape consisting of multiple solutions that share substantial functional overlap. However, there is little interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, posing difficult challenges to the development process, starting with the architectural design of the underlying system. This paper makes two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to have a better understanding of the whole rather than only the part(s). The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model.
- …