
    Overview of VideoCLEF 2008: Automatic generation of topic-based feeds for dual language audio-visual content

    The VideoCLEF track, introduced in 2008, aims to develop and evaluate tasks related to analysis of and access to multilingual multimedia content. In its first year, VideoCLEF piloted the Vid2RSS task, whose main subtask was the classification of dual language video (Dutch-language television content featuring English-speaking experts and studio guests). The task offered two additional discretionary subtasks: feed translation and automatic keyframe extraction. Task participants were supplied with Dutch archival metadata, Dutch speech transcripts, English speech transcripts and 10 thematic category labels, which they were required to assign to the test set videos. The videos were grouped by class label into topic-based RSS feeds, displaying title, description and keyframe for each video. Five groups participated in the 2008 VideoCLEF track. Participants were required to collect their own training data; both Wikipedia and general web content were used. Groups deployed various classifiers (SVM, Naive Bayes and k-NN) or treated the problem as an information retrieval task. Both the Dutch speech transcripts and the archival metadata performed well as sources of indexing features, but no group succeeded in exploiting combinations of feature sources to significantly enhance performance. A small-scale fluency/adequacy evaluation of the translation task output revealed the translation to be of sufficient quality to make it valuable to a non-Dutch-speaking English speaker. For keyframe extraction, the strategy chosen was to select the keyframe from the shot with the most representative speech transcript content. The automatically selected shots were shown, in a small user study, to be competitive with manually selected shots. Future years of VideoCLEF will aim to expand the corpus and the class label list, as well as to extend the track to additional tasks.
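
The classification setup described above (self-collected training data, a Naive Bayes or similar classifier over transcript text) can be sketched roughly as follows. The category names and training snippets here are invented for illustration, not the track's actual labels or data:

```python
# Sketch: bag-of-words Naive Bayes classification of speech transcripts
# into thematic categories, as one participant strategy. Training texts
# stand in for self-collected data (e.g. Wikipedia articles per category).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical (text, thematic label) training pairs.
train_texts = [
    "schilderij museum kunstenaar expositie",   # visual arts terms
    "compositie orkest symfonie dirigent",      # music terms
    "onderzoek wetenschap experiment theorie",  # science terms
]
train_labels = ["visual_arts", "music", "science"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)

# Classify a (hypothetical) Dutch speech transcript snippet.
print(clf.predict(["het orkest speelde een symfonie"])[0])  # music
```

In the actual task the predicted label would then decide which topic-based feed the video is placed in.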

    Looking at a digital research data archive - Visual interfaces to EASY

    In this paper we visually explore the structure of the collection of a digital research data archive in terms of the metadata of deposited datasets. We look into the distribution of datasets over different scientific fields, the role of main depositors (persons and institutions) in different fields, and the main access choices for the deposited datasets. We argue that visual analytics of collection metadata can be used in multiple ways: to inform the archive about the structure and growth of its collection, to foster collection strategies, and to check metadata consistency. We combine visual analytics and visually enhanced browsing, introducing a set of web-based, interactive visual interfaces to the archive's collection. We discuss how text-based search combined with visually enhanced browsing enhances data access, navigation, and reuse. Comment: Submitted to the TPDL 201
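
The explorations described above start from simple aggregations of the metadata: datasets per scientific field, per depositor, per access choice. A minimal sketch over invented EASY-style metadata records (field names and values are assumptions):

```python
# Sketch: counting deposited datasets by field and by access choice,
# the kind of aggregation that feeds the visual interfaces. Records
# are invented for illustration.
from collections import Counter

records = [
    {"field": "Archaeology", "depositor": "Inst. A", "access": "open"},
    {"field": "Archaeology", "depositor": "Inst. A", "access": "restricted"},
    {"field": "Social Sciences", "depositor": "Univ. B", "access": "open"},
]

per_field = Counter(r["field"] for r in records)
per_access = Counter(r["access"] for r in records)
print(per_field)   # Counter({'Archaeology': 2, 'Social Sciences': 1})
print(per_access)  # Counter({'open': 2, 'restricted': 1})
```

Such counts would then be rendered as interactive charts rather than printed.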

    The structure of the Arts & Humanities Citation Index: A mapping on the basis of aggregated citations among 1,157 journals

    Using the Arts & Humanities Citation Index (A&HCI) 2008, we apply mapping techniques previously developed for mapping journal structures in the Science and Social Science Citation Indices. Citation relations among the 110,718 records were aggregated at the level of 1,157 journals specific to the A&HCI, and we examine whether a cognitive structure can be reconstructed and visualized from these journal structures. Both cosine normalization (bottom-up) and factor analysis (top-down) suggest a division into approximately twelve subsets. The relations among these subsets are explored using various visualization techniques. However, we were not able to retrieve this structure using the ISI Subject Categories, including the 25 categories which are specific to the A&HCI. We discuss options for validation, for example against the categories of the Humanities Indicators of the American Academy of Arts and Sciences and the panel structure of the European Reference Index for the Humanities (ERIH), and compare our results with the curriculum organization of the Humanities Section of the College of Letters and Sciences of UCLA as an example of institutional organization.
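
The bottom-up step above, cosine normalization of an aggregated journal citation matrix, can be sketched on a toy example. The 4-journal matrix below is invented; only the procedure mirrors the paper:

```python
# Sketch: cosine similarity between journals' citing profiles, the
# bottom-up normalization step used before visualizing journal subsets.
import numpy as np

# Rows = citing journal, columns = cited journal; the diagonal holds
# journal self-citations. Counts are invented for illustration.
C = np.array([
    [20, 10,  1,  0],
    [ 9, 25,  2,  0],
    [ 1,  0, 18,  8],
    [ 0,  2,  9, 22],
], dtype=float)

# Normalize each citing profile to unit length, then take dot products.
R = C / np.linalg.norm(C, axis=1, keepdims=True)
S = R @ R.T
print(np.round(S, 2))
# Journals {0, 1} and {2, 3} form two clearly separated subsets here;
# at the scale of the A&HCI the same procedure suggests ~12 subsets.
```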

    DCU at VideoCLEF 2008

    We describe a baseline system for the VideoCLEF Vid2RSS task, built on an unaltered off-the-shelf Information Retrieval system. ASR content is indexed using default stemming and stopping methods. The subject categories are populated by using each category label as a query on the collection and assigning the retrieved items to that category. We describe the results of the system and provide some high-level analysis of its performance.
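
The label-as-query idea above can be sketched with a TF-IDF index: issue each category label as a query and assign any transcript it retrieves to that category. Documents, labels, and the zero-score threshold below are illustrative assumptions, not the DCU system's actual settings:

```python
# Sketch: retrieval-based labelling — each category label is a query
# over the indexed ASR transcripts; retrieved videos get that label.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = {
    "video1": "tonight's music programme features the orchestra live",
    "video2": "science news: researchers discussed a physics experiment",
}
labels = ["music", "science"]

vec = TfidfVectorizer(stop_words="english")
D = vec.fit_transform(docs.values())   # transcript index
Q = vec.transform(labels)              # labels as queries

scores = (Q @ D.T).toarray()           # rows are l2-normalized -> cosine
for i, label in enumerate(labels):
    hits = [v for v, s in zip(docs, scores[i]) if s > 0]
    print(label, "->", hits)
```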

    Classification of dual language audio-visual content: Introduction to the VideoCLEF 2008 pilot benchmark evaluation task

    VideoCLEF is a new track for the CLEF 2008 campaign. This track aims to develop and evaluate tasks in analyzing multilingual video content. A pilot Vid2RSS task, involving the assignment of thematic class labels to videos, kicks off the VideoCLEF track in 2008. Task participants deliver classification results in the form of a series of feeds, one for each thematic class. The data for the task are dual language television documentaries: Dutch is the dominant language, and English-language content (mostly interviews) is embedded. Participants are provided with speech recognition transcripts of the data in both Dutch and English, and also with metadata generated by archivists. In addition to the classification task, participants can choose to participate in a translation task (translating the feed into a language of their choice) and a keyframe selection task (choosing a semantically appropriate keyframe to depict each video in the feed).
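
The deliverable described above is one feed per thematic class, with each item carrying a title, description and keyframe. A minimal sketch of building such a feed (all field values and the image-enclosure convention are invented for illustration):

```python
# Sketch: grouping classified videos by thematic label and emitting
# one RSS 2.0 feed per class, as the Vid2RSS deliverable requires.
import xml.etree.ElementTree as ET

videos = [
    {"title": "Episode 12", "description": "Interview with a physicist",
     "keyframe": "ep12_shot3.jpg", "label": "science"},
    {"title": "Episode 15", "description": "Studio debate on climate",
     "keyframe": "ep15_shot1.jpg", "label": "science"},
]

def build_feed(label, items):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = f"Vid2RSS: {label}"
    for it in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = it["title"]
        ET.SubElement(item, "description").text = it["description"]
        ET.SubElement(item, "enclosure", url=it["keyframe"], type="image/jpeg")
    return ET.tostring(rss, encoding="unicode")

feed = build_feed("science", [v for v in videos if v["label"] == "science"])
print(feed)
```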

    Improvement of speed response in four-phase DC–DC converter switching using two shunt voltage-source

    This study proposes a technique to improve the speed response of four-phase DC–DC converter switching. The basic concept is the inclusion of two shunt-connected voltage sources in series with the converter system. By using a higher input voltage to drive the load, a higher output current slew rate (current per microsecond) is obtained, and the system reverts to its nominal input once the desired reference is reached. The transient response with the proposed technique is thus much faster than that of the conventional converter. Moreover, the technique is easy to implement, as it requires only an additional voltage source, a power switch, and a power diode. The integrated model of the two shunt voltage sources in a four-phase DC–DC converter was simulated in MATLAB/Simulink and validated against experimental results from a laboratory prototype, a 600 W four-phase DC–DC converter. The novelty of the proposed technique is its ability to provide faster operation for critical-load applications, a smaller output capacitor, and a lower operating frequency.
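
The speed-up reasoning above follows from the inductor current slew rate of a buck-type phase, di/dt = (V_in − V_out)/L: temporarily raising V_in during a transient raises the rate at which load current can build. The component values below are illustrative assumptions, not the paper's prototype parameters:

```python
# Sketch: effect of a temporarily boosted input voltage on the inductor
# current slew rate of one converter phase (ideal buck-type model).
L = 100e-6     # per-phase inductance (H), assumed
V_out = 12.0   # regulated output voltage (V), assumed

def current_slew(V_in):
    """Inductor current rise rate in A/us for a given input voltage."""
    return (V_in - V_out) / L * 1e-6

nominal = current_slew(24.0)          # nominal input only
boosted = current_slew(24.0 + 12.0)   # with series shunt sources engaged
print(f"nominal: {nominal:.2f} A/us, boosted: {boosted:.2f} A/us")
```

Here a 12 V series boost doubles the available slew rate, which is the mechanism behind the faster transient response the study reports.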

    Low-velocity impact response of RC beam with artificial polyethylene aggregate as concrete block infill

    In structural design, an ideal situation for saving materials would be to reduce the weight of a structure without compromising its strength and serviceability. A new lightweight composite reinforced concrete section was developed with a novel use of a lightweight concrete block as infill utilizing Artificial Polyethylene Aggregate (APEA and MAPEA). The concrete near the neutral axis acts as a stress transfer medium between the compression and tension zones, so its partial replacement can reduce weight and save material. In this experimental work, APEA and MAPEA were utilized as replacements for normal aggregates (NA) at percentages of 0%, 3%, 6%, 9%, 12%, and 100% in the concrete mix; the concrete block infill uses 100% MAPEA as a replacement for coarse aggregate. A total of sixteen beams measuring 170 mm × 250 mm × 1000 mm were prepared: four control specimens (NRC) and twelve reinforced concrete beams incorporating different sizes of concrete block infill (RCAI) consisting of 100% MAPEA. All beams were tested with a 100 kg steel weight dropped vertically from heights of 0.6 m and 1.54 m, equivalent to impact velocities of 3.5 m/s and 5.5 m/s respectively. The experimental results show that the impact load affected the impact force, displacement, and crack patterns. Compared to the NRC specimens, the RCAI specimens showed a larger impact force but smaller displacement, and the cracks near the mid-span were narrower. All experimental results were validated against FEM: the transient impact force histories, displacements, and crack patterns matched reasonably well, with errors ranging from 1% to 15%.
The results show that the proposed concrete block infill performs well under impact loads. The main advantage of the infill utilizing MAPEA from waste plastic bags is a weight reduction of about 6% in the concrete beams.
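
The drop heights and impact velocities quoted above are related by free-fall kinematics, v = sqrt(2·g·h). A quick check of the reported equivalences:

```python
# Sketch: converting the test drop heights to impact velocities via
# free-fall kinematics, v = sqrt(2*g*h).
import math

g = 9.81  # gravitational acceleration, m/s^2
for h in (0.6, 1.54):
    v = math.sqrt(2 * g * h)
    print(f"h = {h} m -> v = {v:.2f} m/s")
# 0.6 m gives ~3.43 m/s (reported as 3.5 m/s); 1.54 m gives ~5.50 m/s.
```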

    NLP and the Humanities: The Revival of an Old Liaison

    This paper presents an overview of some emerging trends in the application of NLP in the domain of the so-called Digital Humanities and discusses the role and nature of metadata, the annotation layer that is so characteristic of documents that play a role in the scholarly practices of the humanities. It is explained how metadata are the key to the added value of techniques such as text and link mining, and an outline is given of what measures could be taken to increase the chances of a bright future for the old ties between NLP and the humanities. There is no data like metadata.

    Analyzing Ancient Maya Glyph Collections with Contextual Shape Descriptors

    This paper presents an original approach for shape-based analysis of ancient Maya hieroglyphs based on an interdisciplinary collaboration between computer vision and archeology. Our work is guided by realistic needs of archaeologists and scholars who critically need support for search and retrieval tasks in large Maya imagery collections. Our paper has three main contributions. First, we introduce an overview of our interdisciplinary approach towards the improvement of the documentation, analysis, and preservation of Maya pictographic data. Second, we present an objective evaluation of the performance of two state-of-the-art shape-based contextual descriptors (Shape Context and Generalized Shape Context) in retrieval tasks, using two datasets of syllabic Maya glyphs. Based on the identification of their limitations, we propose a new shape descriptor named Histogram of Orientation Shape Context (HOOSC), which is more robust and suitable for the description of Maya hieroglyphs. Third, we present what to our knowledge constitutes the first automatic analysis of visual variability of syllabic glyphs along historical periods and across geographic regions of the ancient Maya world via the HOOSC descriptor. Overall, our approach is promising, as it improves performance on the retrieval task, has been successfully validated from an epigraphic viewpoint, and has the potential of offering both novel insights in archeology and practical solutions for real daily scholarly needs.
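
The core idea behind HOOSC, as described above, is to replace Shape Context's point counts with histograms of local orientations accumulated over a spatial partition around each contour point. A simplified single-ring sketch of that idea (the real descriptor uses a log-polar partition; the bin counts and sample points here are assumptions):

```python
# Sketch: an orientation-histogram shape descriptor in the spirit of
# HOOSC — angular sectors around a reference point, each holding a
# histogram of the local edge orientations falling in that sector.
import numpy as np

def hoosc_like(points, orientations, center, n_angular=8, n_orient=4):
    """Accumulate orientation histograms over angular sectors around a point."""
    desc = np.zeros((n_angular, n_orient))
    for p, theta in zip(points, orientations):
        d = p - center
        if not d.any():
            continue  # skip the reference point itself
        frac = (np.arctan2(d[1], d[0]) % (2 * np.pi)) / (2 * np.pi)
        sector = int(frac * n_angular) % n_angular
        obin = int((theta % np.pi) / np.pi * n_orient) % n_orient
        desc[sector, obin] += 1
    total = desc.sum()
    return desc / total if total else desc  # normalize to a distribution

# Four contour points with local edge orientations (invented data).
pts = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
oris = np.array([0.0, np.pi / 2, 0.0, np.pi / 2])
descriptor = hoosc_like(pts, oris, center=np.array([0.0, 0.0]))
print(descriptor.shape, descriptor.sum())  # (8, 4) 1.0
```

Descriptors of two glyphs would then be compared with a histogram distance to drive retrieval.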