Multimedia
The ubiquitous, effortless digital data capture and processing capabilities offered by the majority of devices have led to an unprecedented penetration of multimedia content into our everyday life. To make the most of this phenomenon, the rapidly increasing volume and usage of digitised content require constant re-evaluation and adaptation of multimedia methodologies, in order to meet the relentless change of requirements from both the user and system perspectives. Advances in Multimedia provides readers with an overview of the ever-growing field of multimedia by bringing together research studies and surveys from different subfields that address these important aspects. Some of the main topics that this book deals with include: multimedia management in peer-to-peer structures and wireless networks, security characteristics in multimedia, bridging the semantic gap for multimedia content, and novel multimedia applications.
Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media, including text, images, 3D graphics, audio and video, are produced, distributed, shared, managed and consumed on-line through various networks, such as the Internet, fibre, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted with a bewildering range of media, services and applications, and with technological innovations concerning media formats, wireless networks, and terminal types and capabilities. There is little evidence that the pace of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so regularly, searching in more than 160 exabytes of content. In the near future these numbers are expected to rise exponentially. Internet content is expected to increase by at least a factor of six, rising to more than 990 exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged that in the near to mid term the Internet will provide the means to share and distribute (new) multimedia content and services with superior quality and striking flexibility, in a trusted and personalized way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content, as well as community networks and the use of peer-to-peer (P2P) overlays, are expected to generate new models of interaction and cooperation, and to support enhanced perceived quality of experience (PQoE) and innovative applications "on the move", such as virtual collaboration environments, personalised services and media, virtual sport groups, on-line gaming and edutainment. In this context, the interaction with content, combined with interactive multimedia search capabilities across distributed repositories, opportunistic P2P networks and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects in Framework Programme 6 (FP6) and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily contributed to this white paper, which aims to describe the status, the state of the art, the challenges and the way ahead in the area of content-aware media delivery platforms.
Quality of Service Controlled Multimedia Transport Protocol
PhD thesis. This research looks at the design of an open transport protocol that supports a range of services, including multimedia, over low data-rate networks. Low data-rate multimedia applications require a system that provides quality of service (QoS) assurance and flexibility. One promising field is content-based coding. Content-based systems use an array of protocols to select the optimum set of coding algorithms. A content-based transport protocol integrates a content-based application with a transmission network.
General transport protocols form a bottleneck in low data-rate multimedia communications by limiting throughput or by not maintaining timing requirements. This work presents an original model of a transport protocol that eliminates the bottleneck by introducing a flexible yet efficient algorithm that uses an open approach to flexibility and a holistic architecture to promote QoS. The flexibility and transparency come in the form of a fixed syntax that provides a set of transport protocol semantics. The media QoS is maintained by defining a generic descriptor. Overall, the structure of the protocol is based on a single adaptable algorithm that supports application independence, network independence and quality of service.
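One way to picture the generic QoS descriptor mentioned above is as a small, fixed data structure carried per media stream, against which a multiplexer can make admission and scheduling decisions. The field names and admission rule below are illustrative assumptions only, not the thesis's actual syntax:

```python
from dataclasses import dataclass

@dataclass
class QoSDescriptor:
    """Hypothetical generic QoS descriptor for one media stream.

    All field names are illustrative; the thesis defines its own
    fixed syntax for transport-protocol semantics.
    """
    stream_id: int
    max_bitrate_kbps: int   # upper bound the multiplexer may allocate
    max_delay_ms: int       # timing requirement for delivery
    loss_tolerance: float   # fraction of units that may be dropped (0..1)
    priority: int           # relative importance among multiplexed streams

def schedulable(streams, link_kbps):
    """Simple admission check: the total demanded bitrate fits the link."""
    return sum(s.max_bitrate_kbps for s in streams) <= link_kbps
```

A real QoS-controlled multiplexer would, of course, use the delay and priority fields dynamically rather than just checking aggregate bitrate at admission time.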
The transport protocol was evaluated through a set of assessments: off-line; off-line for a specific application; and on-line for a specific application. The application contexts used MPEG-4 test material, and the on-line assessment used a modified MPEG-4 player. The performance of the QoS-controlled transport protocol is often better than other schemes when appropriate QoS-controlled management algorithms are selected. This is shown first in an off-line assessment where performance is compared between the QoS-controlled multiplexer, an emulated MPEG-4 FlexMux multiplexer scheme, and the target requirements. The performance is also shown to be better in a real environment when the QoS-controlled multiplexer is compared with the real MPEG-4 FlexMux scheme.
Service-oriented models for audiovisual content storage
What are the important topics to understand when involved with storage services that hold digital audiovisual content? This report examines how content is created and moves into and out of storage; the storage service value networks and architectures found now and expected in the future; the kinds of data transfer expected to and from an audiovisual archive; the transfer protocols to use; and a summary of security and interface issues.
An intelligent approach to quality of service for MPEG-4 video transmission in IEEE 802.15.1
Nowadays, wireless connectivity is becoming ubiquitous, spreading through companies and domestic areas. IEEE 802.15.1, commonly known as Bluetooth, is a high-quality, high-security, high-speed and low-cost radio technology. This wireless technology allows a maximum access range of 100 meters yet needs as little as 1mW of power. Regrettably, IEEE 802.15.1 has very limited bandwidth. This limitation can become a real problem if the user wishes to transmit a large amount of data in a very short time. Version 1.2, which is used in this project, can only carry a maximum download rate of 724Kbps and an upload rate of 54Kbps in its asynchronous mode, but video needs very large bandwidth to be transmitted with a sufficient level of quality. Video transmission over IEEE 802.15.1 networks is therefore difficult to achieve, due to the limited bandwidth. Hence, a solution is required that delivers digital video with sufficient picture quality at the receiving end. A hybrid scheme has been developed in this thesis, comprising a fuzzy logic set of rules and an artificial neural network algorithm. MPEG-4 video compression is used in this work to optimise the transmission. The research further utilises an "added buffer" to prevent excessive data loss of MPEG-4 video over IEEE 802.15.1 transmission and subsequently increase picture quality. The neural-fuzzy scheme regulates the output rate of the added buffer to ensure that the MPEG-4 video stream conforms to the traffic conditions of the IEEE 802.15.1 channel during the transmission period; that is, it sends more data when the bandwidth is not fully used and keeps data in the buffer when the bandwidth is overused. Computer simulation results confirm that the intelligent techniques and the added buffer improve picture quality and reduce data loss and communication delay, compared with conventional MPEG video transmission over IEEE 802.15.1.
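The buffer-regulation idea described above can be sketched as a tiny fuzzy controller: fuzzify the observed channel utilisation, apply rules that drain the buffer fast when the channel is idle and hold data back when it is congested, then defuzzify by weighted average. The membership functions and rule base below are illustrative assumptions, not the thesis's actual neural-fuzzy scheme:

```python
def fuzzy_drain_rate(channel_util, buffer_fill):
    """Toy fuzzy controller for the 'added buffer' drain rate.

    channel_util: observed IEEE 802.15.1 channel utilisation (0..1)
    buffer_fill:  fraction of the added buffer occupied (0..1)
    Returns a drain-rate factor in [0, 1]; 1 = send at full rate.
    """
    # Fuzzify utilisation with triangular membership functions
    low  = max(0.0, 1.0 - channel_util / 0.5)             # fully 'low' at 0
    med  = max(0.0, 1.0 - abs(channel_util - 0.5) / 0.5)  # peak at 0.5
    high = max(0.0, (channel_util - 0.5) / 0.5)           # fully 'high' at 1

    # Rules: idle channel -> send more; congested -> hold data,
    # unless the buffer is filling up and risks overflow.
    rules = [
        (low,  1.0),
        (med,  0.5),
        (high, 0.1 + 0.4 * buffer_fill),
    ]
    # Defuzzify: weighted average of rule outputs
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

In the thesis the rule outputs would additionally be tuned by the neural network; this sketch keeps them fixed for clarity.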
AXMEDIS 2008
The AXMEDIS International Conference series aims to explore all subjects and topics related to cross-media and digital-media content production, processing, management, standards, representation, sharing, protection and rights management, and to address the latest developments and future trends of the technologies and their applications, impacts and exploitation. The AXMEDIS events offer venues for exchanging concepts, requirements, prototypes, research ideas, and findings which could contribute to academic research and also benefit business and industrial communities. In the Internet and digital era, cross-media production and distribution represent key developments and innovations, fostered by emergent technologies, that ensure better value for money while optimising productivity and market coverage.
Runtime Adaptation of Scientific Service Workflows
Software landscapes are subject to continual change rather than being complete once built. Changes may be caused by modified customer behavior, a shift to new hardware resources, or otherwise changed requirements. In such situations, several challenges arise. New architectural models have to be designed and implemented, existing software has to be integrated, and, finally, the new software has to be deployed, monitored, and, where appropriate, optimized during runtime under realistic usage scenarios. All of these situations often demand manual intervention, which makes them error-prone.
This thesis addresses these types of runtime adaptation. Based on service-oriented architectures, an environment is developed that enables the integration of existing software (i.e., the wrapping of legacy software as web services). A workflow modeling tool is provided that aims at ease of use by separating the role of the workflow expert from the role of the domain expert. After the development of workflows, tools are presented that observe the executing infrastructure and perform automatic scale-in and scale-out operations. Infrastructure-as-a-Service providers are used to scale the infrastructure in a transparent and cost-efficient way, and the deployment of the necessary middleware is automated.
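The automatic scale-in/scale-out behaviour described above can be reduced to a simple threshold rule over observed load. The thresholds and inputs below are illustrative assumptions; the thesis's scheduler also weighs data-transfer costs and workflow makespan, which this sketch omits:

```python
def scaling_decision(cpu_loads, upper=0.8, lower=0.3, min_nodes=1):
    """Toy scale-out/scale-in rule over recent per-machine CPU loads.

    cpu_loads: recent average CPU load (0..1) for each provisioned machine.
    Returns "scale-out", "scale-in", or "steady".
    """
    avg = sum(cpu_loads) / len(cpu_loads)
    if avg > upper:
        return "scale-out"   # provision a new machine from the IaaS provider
    if avg < lower and len(cpu_loads) > min_nodes:
        return "scale-in"    # release an idle machine to save cost
    return "steady"
```

Hysteresis between the two thresholds prevents the system from oscillating between provisioning and releasing machines on small load fluctuations.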
The use of a distributed infrastructure can lead to communication problems. To keep workflows robust, these exceptional cases need to be treated. Handled naively, however, the process logic of a workflow gets mixed up and bloated with infrastructural details, which increases its complexity. In this work, a module is presented that deals automatically with infrastructural faults and thereby keeps these two layers separate.
When services or their components are hosted in a distributed environment, some requirements need to be addressed at each service separately. Techniques such as object-oriented programming or design patterns like the interceptor pattern ease the adaptation of service behavior or structure, but these methods still require modifying the configuration or the implementation of each individual service. Aspect-oriented programming, in contrast, makes it possible to weave functionality into existing code even without having its source. Since the functionality is woven into the code, however, it depends on the specific implementation; in a service-oriented architecture, where the implementation of a service is unknown, this approach clearly has its limitations. The request/response aspects presented in this thesis overcome this obstacle and provide a new, SOA-compliant method to weave functionality into the communication layer of web services.
The main contributions of this thesis are the following:
Shifting towards a service-oriented architecture: The generic and extensible Legacy Code Description Language and the corresponding framework make it possible to wrap existing software, e.g., as web services, which can afterwards be composed into a workflow with SimpleBPEL without overburdening the domain expert with technical details, which are instead handled by a workflow expert.
Runtime adaptation: Based on the standardized Business Process Execution Language, an automatic scheduling approach is presented that monitors all used resources and is able to automatically provision new machines when a scale-out becomes necessary. If a resource's load drops, e.g., because of fewer workflow executions, a scale-in is performed automatically as well. The scheduling algorithm takes the data transfer between the services into account in order to prevent scheduling allocations that would increase the workflow's makespan through unnecessary or disadvantageous data transfers. Furthermore, a multi-objective scheduling algorithm based on a genetic algorithm can additionally consider cost, so that a user can define her own preferences, trading off optimized workflow execution times against minimized costs. Possible communication errors are automatically detected and, subject to certain constraints, corrected.
Adaptation of communication: The presented request/response aspects make it possible to weave functionality into the communication of web services. By defining a pointcut language that relies only on the exchanged documents, the implementation of the services need neither be known nor be available. The weaving process itself is modeled using web services. In this way, the concept of request/response aspects is naturally embedded into a service-oriented architecture.
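The core idea of a request/response aspect, a pointcut that matches only the exchanged document plus advice run around the service call, can be sketched as follows. The class and its API are illustrative assumptions, not the thesis's actual pointcut language:

```python
class RequestResponseAspect:
    """Sketch of a request/response aspect: the pointcut inspects only
    the exchanged document, so the service implementation need not be
    known or available."""

    def __init__(self, pointcut, before=None, after=None):
        self.pointcut = pointcut              # predicate over the request document
        self.before = before or (lambda doc: doc)
        self.after = after or (lambda doc: doc)

    def invoke(self, service, request):
        """Call the service, weaving advice in when the pointcut matches."""
        if self.pointcut(request):
            request = self.before(request)    # advice woven into the request
            response = service(request)
            return self.after(response)       # advice woven into the response
        return service(request)

# Example: annotate every request document that contains an 'order' field
aspect = RequestResponseAspect(
    pointcut=lambda doc: "order" in doc,
    before=lambda doc: {**doc, "traced": True},
)
```

Because the weaving happens in the communication layer rather than in the service code, the same aspect applies uniformly to services written in any language.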
Application Adaptive Bandwidth Management Using Real-Time Network Monitoring.
Application adaptive bandwidth management is a strategy for ensuring secure and reliable network operation in the presence of undesirable applications competing for a network's crucial bandwidth, covert channels of communication via non-standard traffic on well-known ports, and coordinated denial-of-service attacks. The study undertaken here explored the classification, analysis and management of network traffic on the basis of the ports and protocols used, type of application, traffic direction and flow rates on East Tennessee State University's campus-wide network. Bandwidth measurements over a nine-month period indicated bandwidth abuse of less than 0.0001% of total network bandwidth. The conclusion suggests the use of a defense-in-depth approach in conjunction with the KHYATI (Knowledge, Host hardening, Yauld monitoring, Analysis, Tools and Implementation) paradigm to ensure effective information assurance.
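The kind of port-based classification described above, including the flagging of non-standard traffic on well-known ports as potential covert channels, can be sketched minimally. The port-to-application table and the boolean payload check are illustrative assumptions; the study's actual monitoring used real flow measurements:

```python
# Small illustrative subset of a port-to-application mapping
WELL_KNOWN = {80: "web", 443: "web", 25: "email", 53: "dns"}

def classify_flow(dst_port, payload_is_standard=True):
    """Label a flow by destination port; flag non-standard traffic on a
    well-known port as a potential covert channel."""
    app = WELL_KNOWN.get(dst_port, "unclassified")
    if app != "unclassified" and not payload_is_standard:
        return "possible-covert-channel"
    return app
```

In practice the "payload is standard" decision would come from deep packet or behavioral inspection rather than a precomputed flag.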