
    Multi-document text summarization using text clustering for Arabic Language

    Multi-document summarization produces a single summary from a collection of related documents. In this work we focus on generic extractive Arabic multi-document summarizers and describe a clustering approach to multi-document summarization. A central problem in multi-document summarization is sentence redundancy, which must be eliminated to ensure coherence and improve readability. Our main objective is therefore to examine salient-information selection for the Arabic multi-document summarization task in the presence of noisy and redundant information. We used the Essex Arabic Summaries Corpus (EASC) as data to test this main objective and its sub-objectives. We first tokenized the original text into words, removed all stop words, extracted the root of each word, and represented the text as a bag of words with TF-IDF, excluding the noisy information. In the second step we applied the K-means algorithm with cosine similarity in our experiments to select the best cluster, based on cluster ordering by distance. We then applied an SVM to order the sentences of the selected cluster and chose the highest-weighted sentences for the final summary, reducing redundant information. Finally, the summaries for the ten categories of related documents were evaluated using Recall and Precision, with a best Recall of 0.6 and a best Precision of 0.6.
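    The pipeline above (tokenization, stop-word removal, TF-IDF weighting, cosine similarity, sentence selection) can be sketched in Python. This is a minimal illustration, not the paper's implementation: the K-means clustering and SVM ordering steps are replaced by a simpler average-similarity sentence selection, and a small English stop-word list stands in for the paper's Arabic stop-word removal and root extraction.

    ```python
    import math
    from collections import Counter

    # Illustrative English stop-word list; the paper removes Arabic stop
    # words and extracts the root (stem) of each word instead.
    STOP_WORDS = {"the", "a", "of", "and", "in", "is", "to", "for"}

    def tokenize(text):
        """Split text into lowercase word tokens, dropping stop words."""
        return [w for w in text.lower().split()
                if w.isalpha() and w not in STOP_WORDS]

    def tfidf_vectors(sentences):
        """Represent each sentence as a sparse bag-of-words TF-IDF dict."""
        docs = [tokenize(s) for s in sentences]
        n = len(docs)
        df = Counter(w for d in docs for w in set(d))  # document frequency
        vectors = []
        for d in docs:
            tf = Counter(d)
            # Terms occurring in every sentence get idf = log(1) = 0.
            vectors.append({w: (c / len(d)) * math.log(n / df[w])
                            for w, c in tf.items()})
        return vectors

    def cosine(u, v):
        """Cosine similarity between two sparse vectors."""
        dot = sum(x * v.get(w, 0.0) for w, x in u.items())
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def select_summary(sentences, k=2):
        """Keep the k sentences most similar, on average, to the others
        (a stand-in for the paper's K-means clustering + SVM ordering)."""
        vecs = tfidf_vectors(sentences)
        n = len(vecs)
        scores = [sum(cosine(vecs[i], vecs[j]) for j in range(n) if j != i)
                  / max(n - 1, 1) for i in range(n)]
        ranked = sorted(range(n), key=lambda i: -scores[i])
        return [sentences[i] for i in sorted(ranked[:k])]
    ```

    Given three sentences of which two share vocabulary and one is off-topic, `select_summary` keeps the two mutually similar sentences and drops the outlier, which is the salience-and-redundancy intuition the paper builds on.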

    BIBS: A Lecture Webcasting System

    The Berkeley Internet Broadcasting System (BIBS) is a lecture webcasting system developed and operated by the Berkeley Multimedia Research Center. The system offers live remote viewing and on-demand replay of course lectures using streaming audio and video over the Internet. During the Fall 2000 semester 14 classes were webcast, including several large lower division classes, with a total enrollment of over 4,000 students. Lectures were played over 15,000 times per month during the semester. The primary use of the webcasts is to study for examinations. Students report they watch BIBS lectures because they did not understand material presented in lecture, because they wanted to review what the instructor said about selected topics, because they missed a lecture, and/or because they had difficulty understanding the speaker (e.g., non-native English speakers). Analysis of various survey data suggests that more than 50% of the students enrolled in some large classes view lectures and that as many as 75% of the lectures are played by members of the Berkeley community. Faculty attitudes vary about the virtues of lecture webcasting. Some question the use of this technology while others believe it is a valuable aid to education. Further study is required to accurately assess the pedagogical impact that lecture webcasts have on student learning

    The TRECVID 2007 BBC rushes summarization evaluation pilot

    This paper provides an overview of a pilot evaluation of video summaries using rushes from several BBC dramatic series. It was carried out under the auspices of TRECVID. Twenty-two research teams submitted video summaries, each at most 4% of the original duration, for 42 individual rushes video files, aiming to compress out redundant and insignificant material. The output of two baseline systems built on straightforward content-reduction techniques was contributed by Carnegie Mellon University as a control. Procedures for developing ground-truth lists of important segments from each video were developed at Dublin City University and applied to the BBC video. At NIST each summary was judged by three humans with respect to how much of the ground truth was included, how easy the summary was to understand, and how much repeated material the summary contained. Additional objective measures included: how long it took the system to create the summary, how long it took the assessor to judge it against the ground truth, and what the summary's duration was. Assessor agreement on finding desired segments averaged 78%, and results indicate that while it is difficult to exceed the performance of the baselines, a few systems did.
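    The "how much of the ground truth was included" judgment can be approximated objectively as an interval-overlap check. A minimal sketch, assuming ground-truth and summary segments are (start, end) pairs in seconds and summary segments do not overlap one another; the 0.5 coverage threshold is an assumption, not TRECVID's actual judging protocol:

    ```python
    def fraction_found(ground_truth, summary_segments, min_overlap=0.5):
        """Fraction of ground-truth segments whose duration is covered at
        least `min_overlap` by the summary. Segments are (start, end)
        pairs; summary segments are assumed not to overlap each other."""
        found = 0
        for g_start, g_end in ground_truth:
            # Total summary time falling inside this ground-truth segment.
            covered = sum(max(0.0, min(g_end, s_end) - max(g_start, s_start))
                          for s_start, s_end in summary_segments)
            if covered >= min_overlap * (g_end - g_start):
                found += 1
        return found / len(ground_truth)
    ```

    For example, a summary covering half of one desired segment and most of a second, out of three, scores 2/3 under this measure.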

    Beyond the Frame: Single and multiple video summarization method with user-defined length

    Video summarization is a crucial method for reducing the length of videos, cutting the time spent watching or reviewing a long video. This approach has become more important as the amount of published video increases every day. A single video or multiple videos can be summarized into a relatively short video using a variety of techniques, from multimodal audio-visual methods to natural language processing (NLP) approaches. Audio-visual techniques can recognize significant visual events and pick the most important parts, while NLP techniques can evaluate the audio transcript, extract the main sentences (with timestamps), and retrieve the corresponding video frames from the original video. Another approach is to use the best of both domains: audio-visual cues as well as the video transcript to extract and summarize the video. In this paper, we combine a variety of NLP techniques (extractive and context-based summarizers) with video processing techniques to convert a long video into a single relatively short video. We designed this tool so that the user can specify the relative length of the summarized video. We have also explored ways of summarizing and concatenating multiple videos into a single short video, which helps gather the most important concepts on the same subject in one short video. Our approach shows that video summarization is a difficult but significant task, with substantial potential for further research and development, made possible by advances in NLP models.
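    The user-defined relative length described above can be honored with a simple greedy budget over scored transcript segments. This is a hedged sketch with hypothetical (start, end, score) tuples; the paper's actual segment scores would come from its extractive and context-based summarizers, not from this code:

    ```python
    def pick_segments(segments, ratio):
        """Greedily keep the highest-scored segments until the requested
        fraction of the total duration is filled.
        segments: list of (start, end, score); ratio: target length in (0, 1]."""
        total = sum(end - start for start, end, _ in segments)
        budget = ratio * total
        chosen, used = [], 0.0
        for seg in sorted(segments, key=lambda s: -s[2]):  # best score first
            duration = seg[1] - seg[0]
            if used + duration <= budget:
                chosen.append(seg)
                used += duration
        return sorted(chosen)  # restore chronological order
    ```

    With four equal-length segments and `ratio=0.5`, the two highest-scored segments are kept and re-ordered chronologically, ready to be concatenated into the short output video.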

    Collective intelligence within web video


    A Map for Big Data Research in Digital Humanities

    This article is an attempt to represent Big Data research in digital humanities as a structured research field. A division in three concentric areas of study is presented. Challenges in the first circle – focusing on the processing and interpretations of large cultural datasets – can be organized linearly following the data processing pipeline. Challenges in the second circle – concerning digital culture at large – can be structured around the different relations linking massive datasets, large communities, collective discourses, global actors, and the software medium. Challenges in the third circle – dealing with the experience of big data – can be described within a continuous space of possible interfaces organized around three poles: immersion, abstraction, and language. By identifying research challenges in all these domains, the article illustrates how this initial cartography could be helpful to organize the exploration of the various dimensions of Big Data Digital Humanities research

    Audio browsing of automaton-based hypertext

    With the widespread adoption of hypermedia systems, and the World Wide Web (WWW) in particular, these systems have evolved from simple systems with only textual content into systems with a large content base consisting of a wide variety of document types. Also, as the number of users has increased, so has the need for these systems to be accessible to a wider range of users. Consequently, the growth of the systems, together with the number and variety of users, requires new presentation and navigation mechanisms for a wider audience. One such presentation method is the audio-only presentation of hypertext content, and this research proposes a novel solution to this problem for complex and dynamic systems. The hypothesis is that the proposed Audio Browser is an efficient tool for presenting hypertext in audio format, useful for several applications including browsers for visually impaired and remote users. The Audio Browser provides audio-only browsing of content in a Petri-net-based hypertext system called Context-Aware Trellis (caT). It uses a combination of synthesized speech and pre-recorded speech to allow its user to listen to the contents of documents, follow links, and get information about the navigation process. It also has mechanisms for navigating within documents that allow users to move through contents more quickly.