CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines
Based on information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS Think-Tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective.
The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives for measuring the performance of multimedia search engines.
From a socio-economic perspective we inventory the impact and legal consequences of these technical advances and point out future directions of research.
Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media, including text, images, 3D graphics, audio and video, are produced, distributed, shared, managed and consumed on-line through various networks, like the Internet, fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted with a bewildering range of media, services and applications, and with technological innovations concerning media formats, wireless networks, terminal types and capabilities; there is little evidence that the pace of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more than 100 million users have downloaded at least one (multi)media file and over 47 million of them do so regularly, searching in more than 160 Exabytes of content. In the near future these numbers are expected to rise exponentially: Internet content is expected to grow by at least a factor of 6, to more than 990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged that in the near- to mid-term future, the Internet will provide the means to share and distribute (new) multimedia content and services with superior quality and striking flexibility, in a trusted and personalized way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content, as well as community networks and the use of peer-to-peer (P2P) overlays, are expected to generate new models of interaction and cooperation and to support enhanced perceived quality of experience (PQoE) and innovative applications "on the move", like virtual collaboration environments, personalised services/media, virtual sport groups, on-line gaming and edutainment. In this context, interaction with content, combined with interactive multimedia search capabilities across distributed repositories, opportunistic P2P networks and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects in Framework Programme 6 (FP6) and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily contributed to this white paper, which aims to describe the status, the state of the art, the challenges and the way ahead in the area of content-aware media delivery platforms.
Design and analysis of stream scheduling algorithms in distributed reservation-based multimedia systems
Ph.D. (Doctor of Philosophy)
Delivery of Personalized and Adaptive Content to Mobile Devices: A Framework and Enabling Technology
Many innovative wireless applications that aim to provide mobile information access are emerging. Since people have different information needs and preferences, one of the challenges for mobile information systems is to take advantage of the convenience of handheld devices and deliver personalized information to the right person in a preferred format. However, the unique features of wireless networks and mobile devices pose challenges to personalized mobile content delivery. This paper proposes a generic framework for delivering personalized and adaptive content to mobile users, introduces a variety of enabling technologies and highlights important issues in this area. The framework can be applied to many applications, such as mobile commerce and context-aware mobile services.
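As a hedged illustration of the kind of adaptation step such a framework implies, the sketch below first filters content against a user profile and then tailors it to the device; the DeviceProfile and UserPreferences types and all thresholds are assumptions made for this example, not details taken from the paper.

```python
# Illustrative sketch (not from the paper): one way a delivery pipeline
# could combine user preferences with device capabilities. All names and
# thresholds here are assumptions for the example.
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    screen_width: int        # pixels
    supports_images: bool
    bandwidth_kbps: int

@dataclass
class UserPreferences:
    language: str
    topics: list[str]

def adapt_content(items: list[dict], device: DeviceProfile,
                  prefs: UserPreferences) -> list[dict]:
    """Select items matching the user's interests, then tailor them to the device."""
    selected = [it for it in items
                if it["topic"] in prefs.topics and it["lang"] == prefs.language]
    adapted = []
    for it in selected:
        out = dict(it)
        if "image_url" in out:
            if not device.supports_images or device.bandwidth_kbps < 64:
                del out["image_url"]          # eliminate images on weak links
            elif device.screen_width < 480:
                # ask the origin for a downscaled rendition (hypothetical API)
                out["image_url"] += f"?w={device.screen_width}"
        adapted.append(out)
    return adapted
```

The two-stage split (personalization first, device adaptation second) mirrors the paper's separation of user preferences from device and network constraints.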
Integration of ICN and MEC in 5G and beyond networks: mutual benefits, use cases, challenges, standardization, and future research
Multi-access Edge Computing (MEC) is a novel edge computing paradigm that moves cloud-based processing and storage capabilities closer to mobile users by implementing server resources in the access nodes. MEC helps fulfill the stringent requirements of 5G and beyond networks to offer anytime, anywhere connectivity for many devices with ultra-low delay and huge bandwidths. Information-Centric Networking (ICN) is another prominent network technology that builds on a content-centric network architecture to overcome the shortcomings of host-centric routing and operation and to realize efficient pervasive and ubiquitous networking. It is envisaged to be employed in the Future Internet, including Beyond 5G (B5G) networks. The consolidation of ICN with MEC technology offers new opportunities to realize that vision and serve advanced use cases. However, various integration challenges are yet to be addressed before ICN and MEC can be co-deployed at wide scale in future networks. In this paper, we discuss and elaborate on ICN-MEC integration to provide a comprehensive survey with a forward-looking perspective for B5G networks. In that regard, we deduce lessons learned from related works (for both 5G and B5G networks). We present ongoing standardization activities to highlight the practical implications of such efforts. Moreover, we present key B5G use cases and highlight the role of ICN-MEC integration in addressing their requirements. Finally, we lay out research challenges, identify potential research directions, and map the latter to the ICN-MEC integration challenges and use cases.
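To make the ICN side of the integration concrete, here is a minimal sketch, under assumptions, of how a MEC node could serve requests by content name with in-network caching; the EdgeContentStore class and the fetch_upstream hook are illustrative inventions for this example, not part of any ICN or MEC specification.

```python
# Hypothetical sketch of ICN-style named-content handling at a MEC node:
# requests are served from the edge cache when possible, otherwise
# forwarded upstream and cached on the way back.
from collections import OrderedDict

class EdgeContentStore:
    """A small LRU content store, as an ICN node at the edge might keep."""
    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.store: OrderedDict[str, bytes] = OrderedDict()

    def get(self, name: str) -> bytes | None:
        if name in self.store:
            self.store.move_to_end(name)    # refresh LRU position
            return self.store[name]
        return None

    def put(self, name: str, data: bytes) -> None:
        self.store[name] = data
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

def handle_interest(name: str, cs: EdgeContentStore, fetch_upstream) -> bytes:
    """Serve a request by content name, not by host address."""
    data = cs.get(name)
    if data is None:               # cache miss: go toward the producer
        data = fetch_upstream(name)
        cs.put(name, data)         # in-network caching for later consumers
    return data
```

The point of the sketch is the mutual benefit the survey describes: MEC supplies edge compute and storage, while ICN's name-based access lets any nearby replica satisfy the request.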
CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap
After addressing the state of the art during the first year of CHORUS and establishing the existing landscape of multimedia search engines, we identified and analyzed gaps in the European research effort during our second year. In this period we focused on three directions, notably technological issues, user-centred issues and use cases, and socio-economic and legal aspects. These were assessed through two central studies: firstly, a concerted vision of the functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with a related discussion of the requirements they pose as technological challenges. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as national initiative coordinators. Based on the feedback obtained, we identified two types of gaps, namely core technological gaps that involve research challenges, and "enablers", which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.
Dependable IPTV Hosting
This research focuses on the challenges of hosting 3rd-party RESTful applications that have to meet specific dependability standards. To provide a proof of concept, I have implemented an architecture and framework for the use case of internet protocol television. Delivering TV services via internet protocols over high-speed connections is commonly referred to as IPTV (internet protocol television). Similar to the app stores of smartphones, IPTV platforms enable the emergence of IPTV services in which 3rd-party developers provide services to consumers that add value to the IPTV experience. A key issue in the IPTV ecosystem is that telecommunications IPTV providers currently do not have a system that allows 3rd-party developers to create applications that meet their standards. The main challenges are that the 3rd-party applications must be dependable, scalable and adhere to service level agreements. This research provides an architecture and framework to overcome these challenges.
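One plausible dependability mechanism for such a hosting platform, sketched here purely under assumptions, is a per-application circuit breaker that suspends a 3rd-party app once it repeatedly violates its latency SLA; the thresholds and names below are hypothetical and not taken from the thesis.

```python
# Hedged sketch: SLA supervision of a third-party app via a circuit
# breaker. SLA_MS, FAILURE_THRESHOLD and COOLDOWN_S are assumed values.
import time

SLA_MS = 200            # assumed per-request latency budget
FAILURE_THRESHOLD = 5   # consecutive violations before opening the circuit
COOLDOWN_S = 30         # how long to reject traffic before probing again

class CircuitBreaker:
    def __init__(self):
        self.failures = 0
        self.opened_at: float | None = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at > COOLDOWN_S:
            self.opened_at = None   # half-open: let one request probe the app
            self.failures = 0
            return True
        return False

    def record(self, latency_ms: float) -> None:
        if latency_ms > SLA_MS:
            self.failures += 1
            if self.failures >= FAILURE_THRESHOLD:
                self.opened_at = time.monotonic()
        else:
            self.failures = 0       # any compliant response resets the count

def call_app(breaker: CircuitBreaker, handler, request):
    """Route a request to a third-party app under SLA supervision."""
    if not breaker.allow():
        return {"status": 503, "body": "app suspended for SLA violations"}
    start = time.monotonic()
    response = handler(request)
    breaker.record((time.monotonic() - start) * 1000.0)
    return response
```

Isolating each app behind its own breaker keeps one misbehaving 3rd-party service from degrading the platform-wide IPTV experience, which is the dependability goal the thesis describes.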
Quality of experience aware adaptive hypermedia system
The research reported in this thesis proposes, designs and tests a novel Quality of Experience layer (QoE layer) for the classic Adaptive Hypermedia Systems (AHS) architecture. Its goal is to improve the end-user perceived Quality of Service in the different operational environments typical of residential users. While the AHS's main role of delivering personalised content is not altered, its functionality and performance are improved, and with them the user's satisfaction with the provided service.
The QoE layer takes into account multiple factors that affect Quality of Experience (QoE), such as Web components and the network connection. It uses a novel Perceived Performance Model that considers a variety of performance metrics in order to learn about the characteristics of the Web user's operational environment, about changes in the network connection and about the consequences of these changes for the user's quality of experience. The model also takes into account the user's subjective opinion of his/her QoE, increasing its effectiveness, and suggests strategies for tailoring Web content in order to improve QoE. The user-related information is modelled using a stereotype-based technique that makes use of probability and distribution theory.
The QoE layer has been assessed through both simulations and a qualitative evaluation in the educational area (mainly distance learning), with users interacting with the system in a low-bit-rate operational environment.
The simulations assessed the "learning" and "adaptability" behaviour of the proposed layer over different and variable home connections while a learning task was performed. The correctness of the Perceived Performance Model (PPM) suggestions, the access time of the learning process and the quantity of transmitted data were analysed. The results show that the QoE layer significantly improves performance in terms of the access time of the learning process, with a reduction in the quantity of data sent achieved through image compression and/or elimination. A visual quality assessment confirmed that this reduction in image quality does not significantly affect the viewers' perceived quality, which remained close to the "good" perceptual level.
For the qualitative evaluation, the QoE layer was deployed on the open-source AHA! system. The goal of this evaluation was to compare the learning outcome, system usability and user satisfaction when the AHA! and QoE-aware AHA systems were used. The assessment was performed in terms of learner achievement, learning performance and usability. The results indicate that the QoE-aware AHA system did not affect the learning outcome (the students had similar learning achievements), but learning performance improved in terms of study time. Most significantly, the QoE-aware AHA system provides an important improvement in system usability, as indicated by users' opinions about their satisfaction related to QoE.
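A minimal sketch, assuming a heavily simplified Perceived Performance Model, of the adaptation decision described above: recent download samples yield a throughput estimate, which maps to a keep/compress/drop policy for images. All thresholds and function names are illustrative, not the thesis's actual model.

```python
# Simplified stand-in for the PPM's content-tailoring suggestion.

def estimate_throughput_kbps(samples: list[tuple[int, float]]) -> float:
    """samples: (bytes_downloaded, seconds) pairs from recent page loads."""
    total_bits = sum(b * 8 for b, _ in samples)
    total_secs = sum(s for _, s in samples) or 1e-9
    return total_bits / total_secs / 1000.0

def suggest_image_policy(samples, target_access_time_s: float = 5.0,
                         page_kbytes: float = 120.0) -> str:
    kbps = estimate_throughput_kbps(samples)
    expected_s = page_kbytes * 8 / max(kbps, 1e-9)   # predicted access time
    if expected_s <= target_access_time_s:
        return "keep-images"        # connection meets the access-time goal
    if expected_s <= 2 * target_access_time_s:
        return "compress-images"    # degrade quality, stay near 'good' level
    return "drop-images"            # eliminate images to protect access time

# Example: three loads of ~120 KB taking ~8 s each (~120 kbps) suggest compression.
print(suggest_image_policy([(120_000, 8.0)] * 3))
```

This mirrors the trade-off the simulations report: reducing transmitted data through compression or elimination shortens access time while keeping perceived quality near the "good" level.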
Scripts in a Frame: A Framework for Archiving Deferred Representations
Web archives provide a view of the Web as seen by Web crawlers. Because of the rapid advancement and adoption of client-side technologies like JavaScript and Ajax, coupled with the inability of crawlers to execute these technologies effectively, Web resources become harder to archive as they become more interactive. At Web scale, we cannot capture client-side representations using the current state-of-the-art toolsets because of the migration from Web pages to Web applications. Web applications increasingly rely on JavaScript and other client-side programming languages to load embedded resources and change client-side state. We demonstrate that Web crawlers and other automatic archival tools are unable to archive the resulting JavaScript-dependent representations (what we term deferred representations), resulting in missing or incorrect content in the archives and a general inability to replay the archived resource as it existed at the time of capture.
Building on prior studies of Web archiving, client-side monitoring of events and embedded resources, and studies of the Web, we establish an understanding of the trends contributing to the increasing unarchivability of deferred representations. We show that JavaScript leads to lower-quality mementos (archived Web resources) due to the archival difficulties it introduces. We measure the historical impact of JavaScript on mementos, demonstrating that the increased adoption of JavaScript and Ajax correlates with an increase in missing embedded resources. To measure memento and archive quality, we propose and evaluate a metric that assesses memento quality closer to Web users' perception.
We propose a two-tiered crawling approach that enables crawlers to capture embedded resources dependent upon JavaScript. Measuring the performance benefits between crawl approaches, we propose a classification method that mitigates the performance impacts of the two-tiered crawling approach, and we measure the frontier size improvements observed with the two-tiered approach. Using the two-tiered crawling approach, we measure the number of client-side states associated with each URI-R and propose a mechanism for storing the mementos of deferred representations.
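As a rough sketch of the two-tiered idea, with a deliberately crude heuristic standing in for the dissertation's actual classifier, a crawler could fetch each URI cheaply first and escalate to a headless browser only when the page appears JavaScript-dependent; looks_deferred() and the crawl callbacks below are assumptions for illustration.

```python
# Hedged sketch of two-tiered crawling: tier 1 is a plain HTTP fetch,
# tier 2 a headless browser reserved for likely deferred representations.
import re
import urllib.request

SCRIPT_RE = re.compile(rb"<script\b", re.IGNORECASE)
IMG_OR_LINK_RE = re.compile(rb"<(img|link|source)\b", re.IGNORECASE)

def looks_deferred(html: bytes) -> bool:
    """Crude tier-1 test: many scripts, few statically embedded resources."""
    n_scripts = len(SCRIPT_RE.findall(html))
    n_static = len(IMG_OR_LINK_RE.findall(html))
    return n_scripts >= 3 and n_scripts > n_static

def archive(uri: str, crawl_plain, crawl_with_browser) -> None:
    """Tier 1: plain HTTP crawl. Tier 2: headless browser for deferred pages."""
    with urllib.request.urlopen(uri) as resp:
        html = resp.read()
    if looks_deferred(html):
        crawl_with_browser(uri)    # executes JavaScript, captures final state
    else:
        crawl_plain(uri, html)     # cheap path: archive the fetched bytes
```

Classifying before escalating is what mitigates the performance cost of browser-based crawling, since only the pages likely to produce deferred representations pay the tier-2 price.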
In short, this dissertation details a body of work that explores the following: why JavaScript and deferred representations are difficult to archive (establishing the term deferred representation to describe JavaScript-dependent representations); the extent to which JavaScript impacts archivability, along with its impact on current archival tools; a metric for measuring the quality of mementos, which we use to describe the impact of JavaScript on archival quality; the performance trade-offs between traditional archival tools and technologies that better archive JavaScript; and a two-tiered crawling approach for discovering and archiving the currently unarchivable descendants (representations generated by client-side user events) of deferred representations, mitigating the impact of JavaScript on our archives.
In summary, what we archive is increasingly different from what we as interactive users experience. Using the approaches detailed in this dissertation, archives can create mementos closer to what users experience rather than archiving the crawlers' experience of the Web.
- …