CHORUS Deliverable 3.4: Vision Document
The goal of the CHORUS Vision Document is to create a high-level vision of audio-visual search engines in order to guide future R&D work in this area and to highlight trends and challenges in this domain. The vision of CHORUS is strongly connected to the CHORUS Roadmap Document (D2.3). A concise document integrating the outcomes of the two deliverables will be prepared for the end of the project (NEM Summit).
Zapping index: Using smile to measure advertisement zapping likelihood
In marketing and advertising research, 'zapping' is defined as the action when a viewer stops watching a commercial. Researchers analyze users' behavior in order to prevent zapping, which helps advertisers design effective commercials. Since emotions can be used to engage consumers, in this paper we leverage automated facial expression analysis to understand consumers' zapping behavior. Firstly, we provide an accurate moment-to-moment smile detection algorithm. Secondly, we formulate a binary classification problem (zapping/non-zapping) based on real-world scenarios, and adopt smile response as the feature to predict zapping. Thirdly, to cope with the lack of a metric in advertising evaluation, we propose a new metric called Zapping Index (ZI). ZI is a moment-to-moment measurement of a user's zapping probability. It gauges not only the reaction of a user, but also the preference of a user for commercials. Finally, extensive experiments are performed to provide insights and we make recommendations that will be useful to both advertisers and advertisement publishers.
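The abstract does not publish the exact ZI formula, but the idea of mapping a moment-to-moment smile signal to a zapping probability and then to a binary zapping/non-zapping decision can be sketched as follows. The logistic model and its weights `w0`/`w1` are illustrative assumptions, not the paper's fitted parameters:

```python
import math

def zapping_index(smile_responses, w0=1.0, w1=-4.0):
    """Hypothetical ZI: per-moment zapping probability from smile
    intensities in [0, 1]; stronger smiles lower the probability.
    The logistic weights are placeholder assumptions."""
    return [1.0 / (1.0 + math.exp(-(w0 + w1 * s))) for s in smile_responses]

def classify_zapping(smile_responses, threshold=0.5):
    """Binary zapping/non-zapping decision from the mean zapping index."""
    zi = zapping_index(smile_responses)
    mean_zi = sum(zi) / len(zi)
    return "zapping" if mean_zi > threshold else "non-zapping"
```

In this sketch a consistently smiling viewer yields a low mean ZI (non-zapping), while a flat response yields a high one; the real system would learn such a mapping from labelled viewing sessions.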
Audio/visual analysis for high-speed TV advertisement detection from MPEG bitstream
Advertisement breaks during or between television programmes are typically flagged by series of black-and-silent video frames, which recurrently occur in order to audio-visually separate individual advertisement spots from one another. It is the regular prevalence of these flags that enables automatic differentiation between what is programme content and what is advertisement break. Detection of these audio-visual depressions within broadcast television content provides a basis on which advertisement detection may be achieved. This document reports on the progress made in the development of this idea into an advertisement detector system that automatically detects the advertisement breaks directly from the MPEG-1 encoded bitstream of digitally captured television broadcasts.
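The flagging idea above can be sketched minimally: treat a frame as a candidate flag when it is both dark and near-silent, and report sufficiently long runs of such frames as break markers. The thresholds and the (mean luma, audio energy) frame representation are illustrative assumptions, not the values or features of the described MPEG-1 system:

```python
def is_black_and_silent(mean_luma, audio_energy,
                        luma_thresh=16.0, audio_thresh=0.01):
    """A frame is a candidate flag if it is both dark and near-silent.
    Thresholds are illustrative placeholders."""
    return mean_luma < luma_thresh and audio_energy < audio_thresh

def find_break_flags(frames, min_run=5):
    """Return (start, end) index ranges of runs of black-and-silent frames.

    `frames` is a sequence of (mean_luma, audio_energy) pairs; runs
    shorter than `min_run` are ignored as ordinary dark or quiet content.
    """
    flags, run_start = [], None
    for i, (luma, energy) in enumerate(frames):
        if is_black_and_silent(luma, energy):
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_run:
                flags.append((run_start, i))
            run_start = None
    if run_start is not None and len(frames) - run_start >= min_run:
        flags.append((run_start, len(frames)))
    return flags
```

In the actual system these per-frame statistics would be derived cheaply from the compressed MPEG-1 bitstream rather than from decoded frames, which is what makes the high-speed detection possible.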
CHORUS Deliverable 3.3: Vision Document - Intermediate version
The goal of the CHORUS vision document is to create a high level vision on audio-visual search engines in order to give guidance to the future R&D work in this area (in line with the mandate of CHORUS as a Coordination Action).
This intermediate draft of the CHORUS vision document (D3.3) is based on the previous CHORUS vision documents D3.1 and D3.2, on the results of the six CHORUS Think-Tank meetings held in March, September and November 2007 as well as in April, July and October 2008, and on the feedback from other CHORUS events.
The outcome of the six Think-Tank meetings will not just be to the benefit of the participants, who are stakeholders and experts from academia and industry: CHORUS, as a coordination action of the EC, will feed back the findings (see Summary) to the projects under its purview and, via its website, to the whole community working in the domain of AV content search.
A few subsections of this deliverable are to be completed after the eighth (and presumably last) Think-Tank meeting in spring 2009.
Research in information management at Dublin City University
The Information Management Group at Dublin City University has research themes such as digital multimedia, interoperable systems and database engineering. In the area of digital multimedia, a collaboration with our School of Electronic Engineering has formed the Centre for Digital Video Processing, a university-designated research centre whose aim is to research, develop and evaluate content-based operations on digital video information. To achieve this goal, the range of expertise in this centre covers the complete gamut from image analysis and feature extraction through to video search engine technology and interfaces to video browsing. The Interoperable Systems Group has research interests in federated databases and interoperability, object modelling and database engineering. This report describes the research activities of the major groupings within the Information Management community in Dublin City University.
CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines
Based on the information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS Think-Tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective.
The technical perspective includes an up-to-date view on content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines.
From a socio-economic perspective we inventory the impact and legal consequences of these technical advances and point out future directions of research.
Using association rule mining to enrich semantic concepts for video retrieval
In order to achieve true content-based information retrieval on video we should analyse and index video with high-level semantic concepts in addition to using user-generated tags and structured metadata like title, date, etc. However, the range of such high-level semantic concepts, detected either manually or automatically, is usually limited compared to the richness of information content in video and the potential vocabulary of available concepts for indexing. Even though there is work to improve the performance of individual concept classifiers, we should strive to make the best use of whatever partial sets of semantic concept occurrences are available to us. We describe in this paper our method for using association rule mining to automatically enrich the representation of video content through a set of semantic concepts based on concept co-occurrence patterns. We describe our experiments on the TRECVid 2005 video corpus annotated with the 449 concepts of the LSCOM ontology. The evaluation of our results shows the usefulness of our approach.
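The enrichment idea can be sketched in simplified form: mine single-antecedent association rules (A -> B) from concept co-occurrence in annotated shots, then add the consequents of firing rules to a shot's detected concepts. The support and confidence thresholds are illustrative, and the toy concept names are placeholders for the 449 LSCOM concepts used in the paper:

```python
from itertools import combinations
from collections import Counter

def mine_rules(shots, min_support=0.3, min_confidence=0.8):
    """Return {(A, B): confidence} for rules A -> B over concept sets.

    support(A -> B)    = count(A and B) / number of shots
    confidence(A -> B) = count(A and B) / count(A)
    """
    n = len(shots)
    single = Counter(c for shot in shots for c in shot)
    pair = Counter()
    for shot in shots:
        for a, b in combinations(sorted(shot), 2):
            pair[(a, b)] += 1
            pair[(b, a)] += 1  # rules are directional: keep both orders
    rules = {}
    for (a, b), count in pair.items():
        if count / n >= min_support and count / single[a] >= min_confidence:
            rules[(a, b)] = count / single[a]
    return rules

def enrich(detected, rules):
    """Add the consequents of all rules whose antecedent was detected."""
    enriched = set(detected)
    for (a, b) in rules:
        if a in detected:
            enriched.add(b)
    return enriched
```

So a shot where only "car" was detected could be enriched with "road" if the two concepts reliably co-occur in the annotated corpus; the paper's method generalises this to richer rule sets over the full ontology.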
Digital audio watermarking with semi-blind detection for in-car music content identification
Recent developments in audio watermarking techniques have gone some way towards promoting an industry-wide acceptance of digital audio watermarking as a process that will eventually be used in all audio (and video) production. The predominant focus of such watermarking research has been in the area of content protection, because the prevention of illegal copying is an area of concern for content owners. However, digital audio watermarking may also be used for other purposes, such as the added-value option of real-time content identification of music. While computer-based users of music enjoy the opportunity to identify unknown audio using online tools, identification of audio in an offline domestic or in-car scenario is not so easily achieved. This paper discusses an area of digital audio watermarking that would facilitate real-time in-car identification of the artist, title and/or other metadata relating to music being broadcast by radio.
Battle of the Brains: Election-Night Forecasting at the Dawn of the Computer Age
This dissertation examines journalists' early encounters with computers as tools for news reporting, focusing on election-night forecasting in 1952. Although election night 1952 is frequently mentioned in histories of computing and journalism as a quirky but seminal episode, it has received little scholarly attention. This dissertation asks how and why election night and the nascent field of television news became points of entry for computers in news reporting.
The dissertation argues that although computers were employed as pathbreaking "electronic brains" on election night 1952, they were used in ways consistent with a long tradition of election-night reporting. As central events in American culture, election nights had long served to showcase both news reporting and new technology, whether with 19th-century devices for displaying returns to waiting crowds or with 20th-century experiments in delivering news by radio.
In 1952, key players (television news broadcasters, computer manufacturers, and critics) showed varied reactions to employing computers for election coverage. But this computer use in 1952 did not represent wholesale change. While live use of the new technology was a risk taken by broadcasters and computer makers in a quest for attention, the underlying methodology of forecasting from early returns did not represent a sharp break with pre-computer approaches. And while computers were touted in advance as key features of election-night broadcasts, the "electronic brains" did not replace "human brains" as primary sources of analysis on election night in 1952.
This case study chronicles the circumstances under which a new technology was employed by a relatively new form of the news media. On election night 1952, the computer was deployed not so much to revolutionize news reporting as to capture public attention. It functioned in line with existing values and practices of election-night journalism. In this important instance, therefore, the new technology's technical features were less a driving force for adoption than its usefulness as a wonder and as a symbol to enhance the prestige of its adopters. This suggests that a new technology's capacity to provide both technical and symbolic social utility can be key to its chances for adoption by the news media.