
    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and a socio-economic perspective. The technical perspective includes an up-to-date view on content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives for measuring the performance of multimedia search engines. From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.

    Identifying music documents in a collection of images

    Digital libraries and search engines are now well equipped to find images of documents based on queries. Many images of music scores are now available, often mixed in with textual documents and other images. For example, using the Google “images” search feature, a search for “Beethoven” will return a number of scores and manuscripts as well as pictures of the composer. In this paper we report on an investigation into methods to mechanically determine whether a particular document is indeed a score, so that the user can specify that only musical scores should be returned. The goal is to find a minimal set of features that can be used as a quick test applied to large numbers of documents. A variety of filters were considered, and two promising ones (run-length ratios and the Hough transform) were evaluated. We found that a method based on run-lengths in vertical scans (RL) out-performs a comparable algorithm using the Hough transform (HT). On a test set of 1030 images, RL achieved recall and precision of 97.8% and 88.4%, respectively, while HT achieved 97.8% and 73.5%. In terms of processor time, RL was more than five times as fast as HT.
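    The abstract does not spell out how the vertical run-length feature is computed, but a minimal sketch of such a test might look like the following (the binarization threshold, the definition of a "thin" run, and the decision ratio are illustrative assumptions, not the authors' parameters):

```python
# Illustrative sketch only: the exact run-length statistic is not given in
# the abstract, so the "thin black run" ratio below is an assumed stand-in.
import numpy as np
from PIL import Image

def vertical_black_runs(binary):
    """Collect lengths of vertical black-pixel runs, column by column."""
    runs = []
    for col in binary.T:                          # iterate over image columns
        padded = np.concatenate(([0], col, [0]))  # pad so edges form run boundaries
        changes = np.flatnonzero(np.diff(padded))
        starts, ends = changes[::2], changes[1::2]
        runs.extend(ends - starts)
    return np.asarray(runs)

def looks_like_score(path, thin_max=4, thin_ratio=0.6):
    """Heuristic: staff lines produce many very short vertical black runs."""
    img = np.asarray(Image.open(path).convert("L"))
    binary = (img < 128).astype(np.uint8)         # 1 = black ink
    runs = vertical_black_runs(binary)
    if runs.size == 0:
        return False
    return np.mean(runs <= thin_max) >= thin_ratio
```

    In the same spirit, a Hough-transform variant would look for long, near-horizontal staff lines instead of counting thin vertical runs, which is typically slower because every edge pixel votes in an accumulator.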

    How Cover Images Represent Video Content: A Case Study of Bilibili

    User-generated videos are among the most prevalent online products on social media platforms nowadays. In this context, thumbnails (or cover images) serve the important role of representing the video content and attracting viewers’ attention. In this study, we conducted a content analysis of cover images on the Bilibili video-sharing platform, the Chinese counterpart to YouTube, where content creators can upload videos and design their own cover images rather than using automatically generated thumbnails. We extracted four components – snapshot, background, text overlay, and face – that content creators use most often in cover images. We found that the use of different components and their combinations varies in cover images for videos of different duration. The study sheds light on human input into video representation and addresses a gap in the literature, as video thumbnails have previously been studied mainly as the output of automatic generation by algorithms.
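    The study codes these components through manual content analysis; purely as an illustration, one of them, the "face" component, could be flagged automatically with a stock face detector such as OpenCV's Haar cascade (the cascade file and detection parameters below are assumptions, not part of the study):

```python
# Hypothetical sketch: the paper's component coding was done by human
# annotators; this only shows how the "face" component could be detected
# automatically with a bundled OpenCV Haar cascade.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def has_face(cover_image_path, min_size=(40, 40)):
    """Return True if at least one frontal face is detected in the cover image."""
    gray = cv2.imread(cover_image_path, cv2.IMREAD_GRAYSCALE)
    faces = face_cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=min_size)
    return len(faces) > 0
```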

    The Físchlár digital video recording, analysis, and browsing system

    In the digital video indexing research area, an important technique is shot boundary detection, which automatically segments long video material into camera shots using content-based analysis of the video. We have been developing various shot boundary detection and representative frame selection techniques to automatically index encoded video streams and provide end users with video browsing/navigation features. In this paper we describe a demonstrator digital video system that allows the user to record a TV broadcast programme to MPEG-1 file format and to easily browse and play back the file content online. The system incorporates the shot boundary detection and representative frame selection techniques we have developed and has become a full-featured digital video system that not only demonstrates any further techniques we develop, but also captures users’ video browsing behaviour. At the moment the system has a real-user base of about a hundred people, and we are closely monitoring how they use the video browsing/navigation features the system provides.
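    The abstract does not describe Físchlár's actual detectors or thresholds; a minimal, hypothetical sketch of histogram-based shot boundary detection on a decoded video file could look like this (the 8x8x8 colour histogram and the Bhattacharyya threshold are illustrative choices):

```python
# Minimal sketch of histogram-based shot boundary detection; the Físchlár
# system's real algorithms and parameters are not given in the abstract.
import cv2

def detect_shot_boundaries(video_path, threshold=0.5):
    """Flag frames whose colour histogram differs sharply from the previous frame."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None,
                            [8, 8, 8], [0, 256] * 3)
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Bhattacharyya distance: 0 = identical histograms, 1 = very different
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > threshold:
                boundaries.append(idx)
        prev_hist = hist
        idx += 1
    cap.release()
    return boundaries
```

    A representative frame for browsing could then simply be the first or middle frame of each detected shot.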

    CulturAI: Semantic Enrichment of Cultural Data Leveraging Artificial Intelligence

    In this paper, we propose an innovative tool able to enrich cultural and creative spots (gems, hereinafter) extracted from the European Commission Cultural Gems portal by suggesting relevant keywords (tags) and YouTube videos (represented with proper thumbnails). On the one hand, the system queries the YouTube search portal, selects the videos most related to the given gem, and extracts a set of meaningful thumbnails for each video. On the other hand, each tag is selected by identifying semantically related popular search queries (i.e., trends). In particular, trends are retrieved by querying the Google Trends platform. A further novelty is that our system suggests content in a dynamic way. Indeed, since on both the YouTube and Google Trends platforms the results of a given query include the most popular videos/trends, a gem can be constantly updated with trendy content by periodically re-running the tool. The system has been tested on a set of gems and evaluated with the support of human annotators. The results highlighted the effectiveness of our proposal.
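    As a rough illustration of the video-suggestion side, the public YouTube Data API v3 can return the most relevant videos and their thumbnail URLs for a gem name; the endpoint and response fields below are standard API assumptions, not the CulturAI tool's own code:

```python
# Illustrative sketch using the public YouTube Data API v3 search endpoint;
# the CulturAI tool's own query construction and ranking are not shown here.
import requests

API_KEY = "YOUR_API_KEY"   # placeholder, not a real credential

def videos_for_gem(gem_name, max_results=5):
    """Return (title, thumbnail URL) pairs for videos most related to a gem."""
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/search",
        params={"part": "snippet", "q": gem_name, "type": "video",
                "maxResults": max_results, "key": API_KEY},
        timeout=10)
    resp.raise_for_status()
    return [(item["snippet"]["title"],
             item["snippet"]["thumbnails"]["high"]["url"])
            for item in resp.json().get("items", [])]
```

    Re-running such a query on a schedule is what keeps a gem's suggested videos aligned with currently popular content.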

    Predicting Popularity of Hedonic Digital Content via Artificial Intelligence Imagery Analysis of Thumbnails

    Hedonic digital content backs a wide variety of business models. Yet, due to its experience-good nature, consumers cannot assess its value before consumption. To overcome this obstacle, thumbnail images are frequently employed to provide a preview of the content and trigger views and sales. In spite of fragmented evidence from human-computer interaction research, thumbnails largely constitute a black box for research and practice. This research aims to fill this gap and asks: How and why do basic, conceptual, and social features of thumbnail images affect the popularity of hedonic digital content? To answer this question, we employ artificial intelligence imagery analysis to test and confirm a variance model against evidence from 400,000 YouTube videos. Our findings entail important theoretical contributions to visual perception in online contexts. In addition, this research proposes artificial intelligence imagery analysis as a new and fruitful research method for the largely visual information systems discipline.
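    As an illustration of what "basic" thumbnail features might look like in practice, the sketch below computes brightness, saturation, and contrast with OpenCV; the paper's actual feature set and imagery-analysis pipeline are not specified in the abstract:

```python
# Hypothetical feature sketch: brightness, saturation, and contrast are
# stand-in examples of low-level thumbnail features, not the paper's own set.
import cv2

def thumbnail_features(path):
    """Return a few low-level visual features of a thumbnail image."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return {
        "brightness": float(hsv[:, :, 2].mean()) / 255.0,
        "saturation": float(hsv[:, :, 1].mean()) / 255.0,
        "contrast":   float(gray.std()) / 255.0,
    }
```

    Features like these, combined with conceptual and social labels from an image-analysis service, could then be regressed against view counts to test a variance model of thumbnail-driven popularity.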