Multimedia delivery in the future internet
The term 'Networked Media' implies that all kinds of media, including text, images, 3D graphics, audio and video, are produced, distributed, shared, managed and consumed on-line through various networks, such as the Internet, fibre, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted with a bewildering range of media, services and applications, and with technological innovations concerning media formats, wireless networks, and terminal types and capabilities. There is little evidence that the pace of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so regularly, searching in more than 160 Exabytes of content. In the near future these numbers are expected to rise exponentially: Internet content is expected to increase by at least a factor of 6, to more than 990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged that in the near to mid term the Internet will provide the means to share and distribute (new) multimedia content and services with superior quality and striking flexibility, in a trusted and personalized way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content, community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of interaction and cooperation, and to support enhanced perceived quality of experience (PQoE) and innovative applications "on the move", such as virtual collaboration environments, personalised services and media, virtual sport groups, on-line gaming and edutainment. In this context, interaction with content, combined with interactive multimedia search across distributed repositories, opportunistic P2P networks and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to contribute towards such a vision.
Based on work carried out in a number of EC co-funded projects in Framework Programme 6 (FP6) and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily contributed to this white paper, which aims to describe the status, the state of the art, the challenges and the way ahead in the area of content-aware media delivery platforms.
Architectural support for ubiquitous access to multimedia content
Doctoral thesis. Electrical and Computer Engineering (Telecommunications). Faculdade de Engenharia, Universidade do Porto. 200
Blending of Images Using Discrete Wavelet Transform
The project presents multi-focus image fusion using the discrete wavelet transform (DWT) with the local directional pattern (LDP) and spatial frequency analysis. Multi-focus image fusion in wireless visual sensor networks is the process of blending two or more images into a new one that describes the scene more accurately than any of the individual source images. In this project, the proposed model uses the multi-scale decomposition performed by the discrete wavelet transform to fuse the images in the frequency domain. The transform decomposes an image into two kinds of components, structural and textural information, and does not down-sample the image when transforming it into the frequency domain, so edge and texture details are preserved when the image is reconstructed from the frequency domain. This reduces problems such as the blocking and ringing artifacts that occur with the DCT and the decimated DWT. The low-frequency sub-band coefficients are fused by selecting the coefficients with the maximum spatial frequency, which indicates the overall activity level of an image. The high-frequency sub-band coefficients are fused by selecting the coefficients with the maximum LDP code value; LDP computes the edge response values in all eight directions at each pixel position and generates a code from the relative strength magnitudes. Finally, the two fused frequency sub-bands are inverse transformed to reconstruct the fused image. System performance is evaluated using parameters such as peak signal-to-noise ratio, correlation and entropy.
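The two fusion rules described above can be sketched in a few lines of NumPy. This is a minimal illustration only: it uses a standard decimated single-level Haar transform (the project itself uses an undecimated DWT), and it substitutes a per-coefficient maximum-magnitude rule for the LDP code in the detail bands; all function names are illustrative.

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency: a measure of the overall activity level of an image."""
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def haar_dwt2(img):
    """Single-level 2D Haar transform: approximation (LL) + 3 detail bands."""
    img = img.astype(float)
    lo = (img[:, 0::2] + img[:, 1::2]) / 2   # row-wise averages
    hi = (img[:, 0::2] - img[:, 1::2]) / 2   # row-wise differences
    ll = (lo[0::2] + lo[1::2]) / 2
    lh = (lo[0::2] - lo[1::2]) / 2
    hl = (hi[0::2] + hi[1::2]) / 2
    hh = (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

def fuse(a, b):
    """Fuse two co-registered source images in the Haar domain.

    Returns the four fused sub-bands; the inverse transform is omitted."""
    bands_a, bands_b = haar_dwt2(a), haar_dwt2(b)
    # low-frequency band: keep the source whose LL band is more active
    ll = (bands_a[0] if spatial_frequency(bands_a[0]) >= spatial_frequency(bands_b[0])
          else bands_b[0])
    # detail bands: per-coefficient maximum magnitude (stand-in for the LDP rule)
    fused = [ll]
    for da, db in zip(bands_a[1:], bands_b[1:]):
        fused.append(np.where(np.abs(da) >= np.abs(db), da, db))
    return fused
```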
In-Network View Synthesis for Interactive Multiview Video Systems
To enable Interactive multiview video systems with a minimum view-switching
delay, multiple camera views are sent to the users, which are used as reference
images to synthesize additional virtual views via depth-image-based rendering.
In practice, bandwidth constraints may however restrict the number of reference
views sent to clients per time unit, which may in turn limit the quality of the
synthesized viewpoints. We argue that the reference view selection should
ideally be performed close to the users, and we study the problem of in-network
reference view synthesis such that the navigation quality is maximized at the
clients. We consider a distributed cloud network architecture where data stored
in a main cloud is delivered to end users with the help of cloudlets, i.e.,
resource-rich proxies close to the users. In order to satisfy last-hop
bandwidth constraints from the cloudlet to the users, a cloudlet re-samples
viewpoints of the 3D scene into a discrete set of views (combination of
received camera views and virtual views synthesized) to be used as reference
for the synthesis of additional virtual views at the client. This in-network
synthesis leads to better viewpoint sampling given a bandwidth constraint
compared to simple selection of camera views, but it may however carry a
distortion penalty in the cloudlet-synthesized reference views. We therefore
cast a new reference view selection problem where the best subset of views is
defined as the one minimizing the distortion over a view navigation window
defined by the user under some transmission bandwidth constraints. We show that
the view selection problem is NP-hard, and propose an effective polynomial time
algorithm using dynamic programming to solve the optimization problem.
Simulation results finally confirm the performance gain offered by virtual view
synthesis in the network.
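The reference view selection problem above can be made concrete with a toy model. In this sketch, views and user viewpoints live on a line, the distortion of a synthesized viewpoint is approximated by its distance to the nearest selected reference (an assumption for illustration, not the paper's distortion model), and the selection is solved by brute force rather than the polynomial-time dynamic program the paper proposes.

```python
from itertools import combinations

def navigation_distortion(selected, window):
    """Distortion proxy: each viewpoint in the navigation window is
    synthesized from its nearest selected reference view, and the
    synthesis penalty grows with the distance to that reference."""
    return sum(min(abs(v - r) for r in selected) for v in window)

def select_views(candidates, rates, budget, window):
    """Pick the subset of candidate reference views that minimizes
    navigation distortion without exceeding the last-hop bandwidth budget.
    Exhaustive search for a toy instance (the problem is NP-hard in general)."""
    best, best_d = None, float("inf")
    views = list(candidates)
    for k in range(1, len(views) + 1):
        for subset in combinations(views, k):
            if sum(rates[v] for v in subset) > budget:
                continue  # violates the cloudlet-to-user bandwidth constraint
            d = navigation_distortion(subset, window)
            if d < best_d:
                best, best_d = subset, d
    return best, best_d
```

For example, with candidate views at positions 0, 2, 4, 6, 8, unit rate each, a budget of 2 views, and a navigation window covering viewpoints 0-8, the search selects the two interior views 2 and 6, which cover the window better than any pair of edge views.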
Adaptive multibeam antennas for spacelab. Phase A: Feasibility study
The feasibility of using adaptive multibeam, multi-frequency antennas on Spacelab was studied, and the experiment configuration and program plan needed for a proof-of-concept demonstration were defined. Three applications missions were selected, and requirements were defined for an L-band communications experiment, an L-band radiometer experiment, and a Ku-band communications experiment. Reflector, passive lens, and phased-array antenna systems were considered, and the Adaptive Multibeam Phased Array (AMPA) was chosen. Array configuration and beamforming network tradeoffs resulted in a single 3 m x 3 m L-band array with 576 elements for high radiometer beam efficiency. Separate 0.4 m x 0.4 m arrays are used to transmit and receive at Ku band, with either 576 elements or thinned apertures. Each array has two independently steerable 5 deg beams, which are adaptively controlled.
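The electronic beam steering underlying such a phased array can be illustrated with a minimal sketch: a uniform linear array steers its main beam by applying a progressive phase shift across the elements. The element spacing of half a wavelength is an assumed value, and this is in no way AMPA's actual beamforming network.

```python
import cmath
import math

def steering_phases(n_elems, d_over_lambda, theta_deg):
    """Per-element phase shifts (radians) that steer the main beam of a
    uniform linear array to angle theta from broadside."""
    k = 2 * math.pi * d_over_lambda * math.sin(math.radians(theta_deg))
    return [-n * k for n in range(n_elems)]

def array_factor(phases, d_over_lambda, theta_deg):
    """Normalized array-factor magnitude in direction theta: the coherent
    sum of element contributions, each with its applied phase shift."""
    k = 2 * math.pi * d_over_lambda * math.sin(math.radians(theta_deg))
    s = sum(cmath.exp(1j * (n * k + p)) for n, p in enumerate(phases))
    return abs(s) / len(phases)

# steer a 24-element array to 5 degrees off broadside
phases = steering_phases(24, 0.5, 5.0)
```

In the steered direction the element contributions add in phase and the array factor reaches its maximum of 1; off the steered direction the contributions partially cancel.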
Synthetic Aperture Radar (SAR) data processing
The available and optimal methods for generating SAR imagery for NASA applications were identified, and the SAR image quality and data processing requirements associated with these applications were studied. The mathematical operations and algorithms required to process sensor data into SAR imagery were defined. The architecture of SAR image formation processors was discussed, and the technology necessary to implement the SAR data processors used in both general-purpose and dedicated imaging systems was addressed.
System configuration and executive requirements specifications for reusable shuttle and space station/base
System configuration and executive requirements specifications for reusable shuttle and space station/bas
Study of information transfer optimization for communication satellites
Results are presented from a study of source coding, modulation/channel coding, and systems techniques for teleconferencing over high-data-rate digital communication satellite links. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes, and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, a limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions, which excludes nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model, which is an idealization of an equalized channel. Source coding, with emphasis on video data compression, is reviewed, and the experimental facility used to test promising techniques is fully described.
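The idealized equalized channel mentioned above reduces to additive white Gaussian noise, for which link performance is easy to simulate. A hedged sketch for the simplest case, BPSK (the study considers richer modulation and coding schemes); at Eb/N0 = 4 dB the theoretical bit-error rate Q(sqrt(2 Eb/N0)) is about 1.25e-2.

```python
import numpy as np

rng = np.random.default_rng(0)

def ber_bpsk_awgn(ebn0_db, n_bits=200_000):
    """Monte-Carlo bit-error rate for BPSK over an AWGN channel."""
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                        # 0 -> +1, 1 -> -1
    # unit-energy symbols: noise variance per dimension is N0/2 = 1/(2 Eb/N0)
    noise = rng.normal(scale=np.sqrt(1 / (2 * ebn0)), size=n_bits)
    decisions = (symbols + noise) < 0                 # threshold detector
    return float(np.mean(decisions != bits))

ber = ber_bpsk_awgn(4.0)
```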
Evaluation of unidirectional background push content download services for the delivery of television programs
This doctoral thesis presents background push content download services as an efficient mechanism for delivering pre-produced television content over broadcast networks. Nowadays, network operators dedicate a considerable amount of network resources to the live delivery of television content, both over broadcast networks and over unicast connections. This service offering responds solely to commercial requirements: television content must be available anytime, anywhere. However, from a strictly academic point of view, live delivery is only a requirement for live content, not for content that has been produced before its transmission. Moreover, broadcasting is only efficient when the content is sufficiently popular.
The services studied in this thesis use residual capacity in broadcast networks to push pre-produced content into storage in customer premises equipment. The proposal is justified purely by its efficiency. On the one hand, it creates value from network resources that would otherwise go unused. On the other, it delivers popular, pre-produced content in the most efficient way: over broadcast content download services.
The results include models for the popularity and duration of television content, valuable for any research work based on the delivery of television content. The thesis also evaluates, through empirical studies, the residual capacity available in broadcast networks. These results are then used in simulations that evaluate the performance of the proposed services in different scenarios and for different applications. The evaluation shows that services of this kind are a very useful resource for the delivery of television content.

This thesis dissertation presents background push Content Download Services as an
efficient mechanism to deliver pre-produced television content through existing broadcast
networks. Nowadays, network operators dedicate a considerable amount of network
resources to live streaming, through both broadcast and unicast connections. This
service offering responds solely to commercial requirements: Content must be available
anytime and anywhere. However, from a strictly academic point of view, live streaming is
only a requirement for live content and not for pre-produced content. Moreover,
broadcasting is only efficient when the content is sufficiently popular.
The services under study in this thesis use residual capacity in broadcast networks to push
popular, pre-produced content to storage capacity in customer premises equipment. The
proposal responds only to efficiency requirements. On one hand, it creates value from
network resources otherwise unused. On the other hand, it delivers popular pre-produced
content in the most efficient way: through broadcast download services.
The results include models for the popularity and the duration of television content,
valuable for any research work dealing with file-based delivery of television content. Later,
the thesis evaluates the residual capacity available in broadcast networks through empirical
studies. These results are used in simulations to evaluate the performance of background
push content download services in different scenarios and for different applications. The
evaluation proves that this kind of service can become a great asset for the delivery of
television content.

Fraile Gil, F. (2013). Evaluation of unidirectional background push content download services for the delivery of television programs [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31656
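The efficiency argument above rests on content popularity. As a hedged illustration, assuming a Zipf-like popularity distribution (the exponent and catalogue size here are illustrative, not the thesis's fitted models), pushing only the most popular items over broadcast already cuts the expected delivery cost well below all-unicast delivery:

```python
def zipf_popularity(n_items, s=0.8):
    """Zipf-like popularity distribution over a content catalogue,
    normalized so the probabilities sum to one."""
    weights = [1 / (rank ** s) for rank in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def delivery_cost(popularity, n_users, size, pushed):
    """Expected network cost of serving every request: each of the
    `pushed` most popular items costs one broadcast transmission
    regardless of audience size; every other item costs one unicast
    transfer per expected request."""
    cost = 0.0
    for rank, p in enumerate(popularity):
        if rank < pushed:
            cost += size                      # a single broadcast copy
        else:
            cost += size * p * n_users        # expected unicast transfers
    return cost

popularity = zipf_popularity(100)
all_unicast = delivery_cost(popularity, 10_000, 1.0, pushed=0)
with_push = delivery_cost(popularity, 10_000, 1.0, pushed=10)
```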
MAC-REALM: A video content feature extraction and modelling framework
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
A consequence of the 'data deluge' is the exponential increase in digital video footage, while the ability to find relevant video clips diminishes. Traditional text-based search engines are no longer optimal for searching, as they cannot provide a granular search of the content inside video footage. To search video in a content-based manner, the content features of the video need to be extracted and modelled into a content model, which can then act as a searchable proxy for the video content. This thesis focuses on the extraction of syntactic and semantic content features and on content modelling, using machine-driven processes with little or no user interaction. Our abstract framework design extracts syntactic and semantic content features and compiles them into an integrated content model. The framework integrates a four-plane strategy consisting of: a pre-processing plane, which removes redundant data and filters the media to improve its feature-extraction properties; a syntactic feature extraction plane, which extracts low-level syntactic features and mid-level syntactic features that have semantic attributes; a semantic relationship analysis and linkage plane, where the spatial and temporal relationships of all the content features are defined; and finally a content modelling stage, where the syntactic and semantic content features are integrated into a content model. Each of the four planes can be split into three layers: the content layer, where the content to be processed is stored; the application layer, where the content is converted into content descriptions; and the MPEG-7 layer, where content descriptions are serialised. Using MPEG-7 standards to produce the content model provides wide-ranging interoperability while facilitating granular multi-content-type searches.
The framework aims to 'bridge' the semantic gap by integrating the syntactic and semantic content features from extraction through to modelling. The design of the framework has been implemented in a prototype called MAC-REALM, which has been tested and evaluated for its effectiveness in extracting and modelling content features. Conclusions are drawn about the research output as a whole and whether the objectives have been met. Finally, future work is presented on how concept detection and crowdsourcing can be used with MAC-REALM.
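The serialisation step in the MPEG-7 layer can be sketched as follows. The element names here are simplified stand-ins loosely modelled on MPEG-7 descriptors, not the actual schema MAC-REALM produces, and the shot dictionary format is an assumption for the example.

```python
import xml.etree.ElementTree as ET

def shots_to_mpeg7(shots):
    """Serialise per-shot content descriptions into an MPEG-7-style XML
    document (simplified element names, not the full MPEG-7 schema)."""
    root = ET.Element("Mpeg7")
    desc = ET.SubElement(root, "Description")
    video = ET.SubElement(desc, "Video")
    for shot in shots:
        seg = ET.SubElement(video, "VideoSegment", id=shot["id"])
        ET.SubElement(seg, "MediaTimePoint").text = shot["start"]
        ET.SubElement(seg, "MediaDuration").text = shot["duration"]
        # one syntactic feature (dominant colour) and one semantic label
        ET.SubElement(seg, "DominantColor").text = shot["color"]
        ET.SubElement(seg, "FreeTextAnnotation").text = shot["label"]
    return ET.tostring(root, encoding="unicode")

xml_doc = shots_to_mpeg7([
    {"id": "shot1", "start": "T00:00:00", "duration": "PT12S",
     "color": "128 64 32", "label": "presenter in studio"},
])
```

Serialising into a standard XML vocabulary is what makes the content model a searchable proxy: any MPEG-7-aware tool can query the segments without touching the underlying video.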