
    Web Video in Numbers - An Analysis of Web-Video Metadata

    Web video is often used as a source of data in various fields of study. While specialized subsets of web video, mainly earmarked for dedicated purposes, are often analyzed in detail, little information is available about the properties of web video as a whole. In this paper we present insights gained from analyzing the metadata associated with more than 120 million videos harvested from two popular web video platforms, Vimeo and YouTube, in 2016, and compare their properties with those found in commonly used video collections. This comparison reveals that existing collections do not (or no longer) properly reflect the properties of web video "in the wild". Comment: Dataset available from http://download-dbis.dmi.unibas.ch/WWIN

    The NASA, Marshall Space Flight Center drop tube user's manual

    A comprehensive description of the structural and instrumentation hardware and the experimental capabilities of the 105-meter Marshall Space Flight Center Drop Tube Facility is given. This document is to serve as a guide to the investigator who wishes to perform materials processing experiments in the Drop Tube. Particular attention is given to the Tube's hardware to which an investigator must interface to perform experiments. This hardware consists of the permanent structural hardware (with such items as vacuum flanges) and the experimental hardware (with the furnaces and the sample insertion devices). Two furnaces, an electron-beam and an electromagnetic levitator, are currently used to melt metallic samples in a process environment that can range from 10^-6 Torr to 1 atmosphere. Details of these furnaces, the processing environment gases/vacuum, the electrical power, and the data acquisition capabilities are specified to allow an investigator to design his/her experiment to maximize successful results and to reduce experimental setup time on the Tube. Various devices used to catch samples while inflicting minimum damage and to enhance turnaround time between experiments are described. Enough information is provided to allow an investigator who wishes to build his/her own furnace or sample catch devices to easily interface them to the Tube. The experimental instrumentation and data acquisition systems used to perform pre-drop and in-flight measurements of the melting and solidification process are also detailed. Typical experimental results are presented as an indicator of the type of data that is provided by the Drop Tube Facility. A summary bibliography of past Drop Tube experiments is provided, and an appendix explaining the noncontact temperature determination of free-falling drops is included. This document is to be revised occasionally as improvements to the Facility are made and as the summary bibliography grows.

    "You Tube and I Find" - personalizing multimedia content access

    Recent growth in broadband access and the proliferation of small personal devices that capture images and videos has led to explosive growth of multimedia content available everywhere, from personal disks to the Web. While digital media capture and upload have become nearly universal with newer device technology, there is still a need for better tools and technologies to search large collections of multimedia data and to find and deliver the right content to a user according to her current needs and preferences. A renewed focus on the subjective dimension in the multimedia lifecycle, from creation and distribution to delivery and consumption, is required to address this need beyond what is feasible today. Integration of the subjective aspects of the media itself, its affective, perceptual, and physiological potential (both intended and achieved), together with those of the users themselves will allow for personalizing the content access beyond today's facility. This integration, transforming the traditional multimedia information retrieval (MIR) indexes to more effectively answer specific user needs, will allow a richer degree of personalization predicated on user intention and mode of interaction, relationship to the producer, content of the media, and their history and lifestyle. In this paper, we identify the challenges in achieving this integration, survey current approaches to interpreting content creation processes, to user modelling and profiling, and to personalized content selection, and we detail future directions. The structure of the paper is as follows: In Section I, we introduce the problem and present some definitions. In Section II, we present a review of the aspects of personalized content and current approaches for the same. Section III discusses the problem of obtaining metadata that is required for personalized media creation and presents eMediate as a case study of an integrated media capture environment. Section IV presents the MAGIC system as a case study of capturing effective descriptive data and putting users first in distributed learning delivery. The aspects of modelling the user are presented as a case study in using the user's personality as a way to personalize summaries in Section V. Finally, Section VI concludes the paper with a discussion of the emerging challenges and the open problems.

    Text Mining Infrastructure in R

    During the last decade, text mining has become a widely used discipline utilizing statistical and machine learning methods. We present the tm package, which provides a framework for text mining applications within R. We give a survey of text mining facilities in R and explain how typical application tasks can be carried out using our framework. We present techniques for count-based analysis, text clustering, text classification, and string kernels.
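    The tm package itself is written for R; purely as an illustration of the count-based analysis the abstract mentions, here is a minimal stdlib-Python sketch of the same idea: turn a small corpus into a document-term matrix and tally term frequencies over it. The corpus and variable names are invented for the example.

    ```python
    from collections import Counter

    corpus = [
        "text mining finds patterns in text",
        "machine learning methods for text classification",
        "clustering groups similar documents",
    ]

    # One term-frequency Counter per document (simple whitespace tokenization).
    doc_counts = [Counter(doc.split()) for doc in corpus]

    # Vocabulary = union of all terms, sorted for a stable column order.
    vocab = sorted({term for c in doc_counts for term in c})

    # Document-term matrix: one row per document, columns follow `vocab`.
    dtm = [[c[term] for term in vocab] for c in doc_counts]

    # Count-based analysis: total corpus frequency of each term.
    totals = {term: sum(row[i] for row in dtm) for i, term in enumerate(vocab)}
    print(totals["text"])  # -> 3
    ```

    In tm proper, the same steps correspond to building a `Corpus` and calling `DocumentTermMatrix` on it; the sketch above only mirrors the data structure, not the package's API.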

    In Vitro Fertilization and the Wisdom Of the Roman Catholic Church


    In-Style: Bridging Text and Uncurated Videos with Style Transfer for Text-Video Retrieval

    Large-scale noisy web image-text datasets have proven effective for learning robust vision-language models. However, when transferring them to the task of video retrieval, models still need to be fine-tuned on hand-curated paired text-video data to adapt to the diverse styles of video descriptions. To address this problem without the need for hand-annotated pairs, we propose a new setting, text-video retrieval with uncurated & unpaired data, that during training utilizes only text queries together with uncurated web videos, without any paired text-video data. To this end, we propose an approach, In-Style, that learns the style of the text queries and transfers it to uncurated web videos. Moreover, to improve generalization, we show that one model can be trained with multiple text styles. To this end, we introduce a multi-style contrastive training procedure that improves generalizability over several datasets simultaneously. We evaluate our model's retrieval performance over multiple datasets to demonstrate the advantages of our style transfer framework on the new task of uncurated & unpaired text-video retrieval, and we improve state-of-the-art performance on zero-shot text-video retrieval. Comment: Published at ICCV 2023, code: https://github.com/ninatu/in_styl
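    The contrastive training the abstract refers to is typically a symmetric InfoNCE-style objective over matched text/video embedding pairs. The sketch below is an assumption-laden illustration of that generic objective in plain Python, not the paper's actual loss or model; all names, embeddings, and the temperature value are invented for the example.

    ```python
    import math

    def cosine(u, v):
        # Cosine similarity between two dense vectors.
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    def contrastive_loss(text_embs, video_embs, temperature=0.07):
        """Symmetric InfoNCE: the matched text/video pair shares an index i."""
        n = len(text_embs)
        sims = [[cosine(t, v) / temperature for v in video_embs] for t in text_embs]
        loss = 0.0
        for i in range(n):
            # text -> video direction: pull video i above all other videos.
            row = [math.exp(s) for s in sims[i]]
            loss -= math.log(row[i] / sum(row))
            # video -> text direction: pull text i above all other texts.
            col = [math.exp(sims[j][i]) for j in range(n)]
            loss -= math.log(col[i] / sum(col))
        return loss / (2 * n)

    # Toy embeddings: each text is closest to its paired video, so the
    # correctly paired batch gives a lower loss than a mismatched one.
    texts = [[1.0, 0.0], [0.0, 1.0]]
    videos = [[0.9, 0.1], [0.1, 0.9]]
    print(contrastive_loss(texts, videos) < contrastive_loss(texts, videos[::-1]))  # True
    ```

    The paper's multi-style variant additionally conditions training on several text styles at once; that machinery is beyond this sketch.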