173 research outputs found

    Efficient Video Indexing on the Web: A System that Leverages User Interactions with a Video Player

    Full text link
    In this paper, we propose a user-based video indexing method that automatically generates thumbnails of the most important scenes of an online video stream by analyzing users' interactions with a web video player. As a test bench to verify our idea, we have extended the YouTube video player into the VideoSkip system. In addition, VideoSkip uses a web database (Google App Engine) to keep a record of some important parameters, such as the timing of basic user actions (play, pause, skip). Moreover, we implemented an algorithm that selects representative thumbnails. Finally, we populated the system with data from an experiment with nine users. We found that the VideoSkip system indexes video content by leveraging implicit user interactions, such as pause and thirty-second skip. Our early findings point toward improvements of the web video player and its thumbnail generation technique. The VideoSkip system could complement content-based algorithms in order to achieve efficient video indexing in difficult videos, such as lectures or sports.
    Comment: 9 pages, 3 figures, UCMedia 2010: 2nd International ICST Conference on User Centric Media
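    The abstract does not spell out the selection algorithm, so the following is a minimal sketch of how interaction logs of this kind could be turned into thumbnail positions: bucket the logged events into fixed-size time bins, weight actions that suggest viewer interest (pausing, skipping back), and take the highest-scoring bins. The bin size, weights, and function names are illustrative assumptions, not VideoSkip's actual implementation.

```python
from collections import Counter

# Hypothetical sketch of interaction-based thumbnail selection, in the
# spirit of VideoSkip. The weights and bin size are assumptions.
BIN_SECONDS = 30             # assumed granularity of a "scene"
WEIGHTS = {"pause": 2.0,     # pausing suggests the user is studying the scene
           "skip_back": 1.5, # jumping back suggests a replay-worthy moment
           "play": 1.0}

def select_thumbnails(events, k=5):
    """events: iterable of (action, video_time_seconds) tuples logged by
    the instrumented player. Returns the start times of the k
    highest-interest bins, i.e. candidate thumbnail positions."""
    scores = Counter()
    for action, t in events:
        if action in WEIGHTS:
            scores[int(t) // BIN_SECONDS] += WEIGHTS[action]
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return [b * BIN_SECONDS for b in sorted(top)]

# Example: two pauses around 60 s make that scene a strong candidate.
log = [("play", 0), ("pause", 61), ("play", 61), ("pause", 64), ("skip_back", 300)]
print(select_thumbnails(log, k=2))  # -> [60, 300]
```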

    Summarizing information from Web sites on distributed power generation and alternative energy development

    Get PDF
    The World Wide Web (WWW) has become a huge repository of information and knowledge, and an essential channel for information exchange. Many sites and thousands of pages of information on distributed power generation and alternative energy development are being added or modified constantly, and the task of finding the most appropriate information is getting difficult. While search engines are capable of returning a collection of links according to key terms and some form of ranking mechanism, it is still necessary to access each Web page and navigate through the site in order to find the information. This paper proposes an interactive summarization framework called iWISE to facilitate the process by providing a summary of the information on the Web site. The proposed approach makes use of graphical visualization, tag clouds, and text summarization. A number of cases are presented and compared in this paper, with a discussion of future work.
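    Tag clouds are one of the three devices iWISE combines; at its core, a tag cloud is a mapping from term frequencies to font sizes. The sketch below shows that mapping under assumed parameters (stop list, size range); it is a generic illustration, not iWISE's actual implementation.

```python
import re
from collections import Counter

# Minimal tag-cloud weighting sketch. The stop list, tag budget, and
# font-size range are illustrative assumptions.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for", "on"}

def tag_cloud_weights(text, max_tags=20, min_pt=10, max_pt=36):
    """Return {term: font_size_pt}, scaling each term's frequency
    linearly between min_pt and max_pt."""
    words = [w for w in re.findall(r"[a-z]+", text.lower())
             if w not in STOPWORDS and len(w) > 2]
    counts = Counter(words).most_common(max_tags)
    if not counts:
        return {}
    hi, lo = counts[0][1], counts[-1][1]
    span = max(hi - lo, 1)
    return {term: min_pt + (freq - lo) * (max_pt - min_pt) // span
            for term, freq in counts}

print(tag_cloud_weights("solar power and wind power drive distributed power generation"))
```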

    A web assessment approach based on summarisation and visualisation

    Get PDF
    The number of Web sites has noticeably increased, to roughly 224 million in the last ten years, reflecting the rapid growth of information on the Internet. Although search engines can help users to filter their desired information, the results are normally presented as a very long list, and users have to visit each Web page in order to determine the appropriateness of the result. As a consequence, a considerable amount of time has to be spent on finding the required information. To address this issue, this paper proposes a Web assessment approach that provides an overview of the information on a Web site by integrating existing summarisation and visualisation techniques, namely text summarisation, tag clouds, Document Type View, and interactive features. This approach is capable of reducing the time required to identify and search for information on the Web.
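    Of the techniques listed, text summarisation carries the most algorithmic weight; a common baseline that approaches like this build on is frequency-based extractive summarisation, sketched below. The sentence splitter and scoring function are simplifying assumptions, not the paper's actual component.

```python
import re
from collections import Counter

# Toy frequency-based extractive summariser: score each sentence by the
# average frequency of its words and keep the top-scoring sentences.
def summarise(text, n_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(s):
        toks = re.findall(r"[a-z]+", s.lower())
        return sum(freq[t] for t in toks) / (len(toks) or 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Re-emit the chosen sentences in their original document order.
    return " ".join(s for s in sentences if s in ranked)
```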

    Tools for Managing the Past Web

    Get PDF
    PDF of a PowerPoint presentation from the Archive-It Partners Meeting in Montgomery, Alabama, November 18, 2014. Also available on SlideShare.

    Tools for Managing the Past Web

    Get PDF
    PDF of a PowerPoint presentation from an Old Dominion University - ECE Department Seminar, February 20, 2015. Also available on SlideShare.

    Improving Collection Understanding for Web Archives with Storytelling: Shining Light Into Dark and Stormy Archives

    Get PDF
    Collections are the tools that people use to make sense of an ever-increasing number of archived web pages. As collections themselves grow, we need tools to make sense of them. Tools that work on the general web, like search engines, are not a good fit for these collections because search engines do not currently represent multiple document versions well. Web archive collections are vast, some containing hundreds of thousands of documents. Thousands of collections exist, many of which cover the same topic. Few collections include standardized metadata. Too many documents from too many collections with insufficient metadata make collection understanding an expensive proposition. This dissertation establishes a five-process model to assist with web archive collection understanding. This model aims to produce a social media story – a visualization with which most web users are familiar. Each social media story contains surrogates, which are summaries of individual documents. These surrogates, when presented together, summarize the topic of the story. After applying our storytelling model, they summarize the topic of a web archive collection. We develop and test a framework to select the best exemplars that represent a collection. We establish that algorithms produced from these primitives select exemplars that are otherwise undiscoverable using conventional search engine methods. We generate story metadata to improve the information scent of a story so users can understand it better. After an analysis showing that existing platforms perform poorly for web archives and a user study establishing the best surrogate type, we generate document metadata for the exemplars with machine learning. We then visualize the story and document metadata together and distribute them to satisfy the information needs of multiple personas who benefit from our model. Our tools serve as a reference implementation of our Dark and Stormy Archives storytelling model. Hypercane selects exemplars and generates story metadata. MementoEmbed generates document metadata. Raintale visualizes and distributes the story based on the story metadata and the document metadata of these exemplars. By providing understanding immediately, our stories save users the time and effort of reading thousands of documents and, most importantly, help them understand web archive collections.
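    The closing sentences describe a three-tool pipeline: Hypercane selects exemplars and story metadata, MementoEmbed builds per-document surrogates, and Raintale renders and distributes the story. The sketch below restates that division of labor as code; the function names and signatures are placeholders for illustration and are not the tools' real APIs.

```python
# Hypothetical orchestration of the Dark and Stormy Archives model.
# All three functions are stand-ins named after the real tools; their
# actual interfaces are not reproduced here.

def hypercane_select_exemplars(collection_urls, k=24):
    """Stand-in for Hypercane: reduce a collection of thousands of
    mementos to k exemplars and derive story-level metadata."""
    exemplars = collection_urls[:k]  # the real selection is algorithmic
    story_meta = {"title": "Collection story", "count": len(exemplars)}
    return exemplars, story_meta

def mementoembed_surrogate(memento_url):
    """Stand-in for MementoEmbed: build one document's surrogate
    (title, snippet, thumbnail) from an archived page."""
    return {"url": memento_url, "title": "...", "snippet": "...", "image": "..."}

def raintale_render(story_meta, surrogates):
    """Stand-in for Raintale: combine story and document metadata
    into a shareable, social-media-style story."""
    lines = [story_meta["title"]] + [s["url"] for s in surrogates]
    return "\n".join(lines)

exemplars, meta = hypercane_select_exemplars(["https://example.org/memento/1"])
story = raintale_render(meta, [mementoembed_surrogate(u) for u in exemplars])
```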

    Web Archive Services Framework for Tighter Integration Between the Past and Present Web

    Get PDF
    Web archives have contained the cultural history of the web for many years, but they still have limited capability for access. Most web archiving research has focused on crawling and preservation activities, with little focus on delivery methods. The current access methods are tightly coupled with web archive infrastructure, hard to replicate or integrate with other web archives, and do not cover all the users' needs. In this dissertation, we focus on access methods for archived web data to enable users, third-party developers, researchers, and others to gain knowledge from the web archives. We build ArcSys, a new service framework that extracts, preserves, and exposes APIs for the web archive corpus. The dissertation introduces a novel categorization technique to divide the archived corpus into four levels. For each level, we propose suitable services and APIs that enable both users and third-party developers to build new interfaces. The first level is the content level, which extracts the content from the archived web data. We develop ArcContent to expose the web archive content processed through various filters. The second level is the metadata level; we extract the metadata from the archived web data and make it available to users. We implement two services: ArcLink for the temporal web graph and ArcThumb for optimizing thumbnail creation in web archives. The third level is the URI level, which focuses on using the URI HTTP redirection status to enhance the user query. Finally, the highest level in the web archiving service framework pyramid is the archive level. In this level, we define the web archive by the characteristics of its corpus and build Web Archive Profiles. The profiles are used by the Memento Aggregator for query optimization.
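    The four-level pyramid is easiest to see laid out as data. The snippet below restates the levels and the services the abstract assigns to each; the enum structure itself is just an illustrative organization, not part of ArcSys.

```python
from enum import Enum

# ArcSys's four-level categorization, restated as data. Level names and
# services come from the abstract; the mapping is illustrative.
class ArcLevel(Enum):
    CONTENT = 1   # extract content from archived web data
    METADATA = 2  # expose metadata about archived web data
    URI = 3       # use URI HTTP redirection status to enhance queries
    ARCHIVE = 4   # characterize a whole archive via Web Archive Profiles

SERVICES = {
    ArcLevel.CONTENT: ["ArcContent"],
    ArcLevel.METADATA: ["ArcLink", "ArcThumb"],
    ArcLevel.URI: [],  # query enhancement; no named service in the abstract
    ArcLevel.ARCHIVE: ["Web Archive Profiles (used by the Memento Aggregator)"],
}

for level in ArcLevel:
    print(level.name, "->", SERVICES[level] or "(no named service)")
```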

    Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop

    Get PDF
    Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical, since videos may not be sufficiently semantically annotated, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the need to watch videos, and visualizing aggregated information about the search results. We demonstrate the system for searching spatiotemporal attributes in sports video to identify key instances of team and player performance.
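    The active machine learning step is the core of the loop: the analyst labels a few candidate results against the sketch, the model retrains, and the system queries the segments it is least certain about. Below is a generic uncertainty-sampling sketch of such a loop; the model choice, query budget, and feature representation are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic active-learning loop with uncertainty sampling. Assumes the
# random seed sample contains examples of both classes.
def active_learning_loop(X_pool, oracle_label, n_seed=10, n_rounds=5, batch=5):
    """X_pool: feature vectors for unlabeled video segments (ndarray).
    oracle_label(i) -> 0/1: the analyst judging segment i against a sketch."""
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X_pool), size=n_seed, replace=False))
    y = {i: oracle_label(i) for i in labeled}
    model = LogisticRegression()
    for _ in range(n_rounds):
        model.fit(X_pool[labeled], [y[i] for i in labeled])
        proba = model.predict_proba(X_pool)[:, 1]
        # Query the segments the model is least certain about (p near 0.5).
        uncertainty = -np.abs(proba - 0.5)
        candidates = [i for i in np.argsort(uncertainty)[::-1] if i not in y]
        for i in candidates[:batch]:
            y[i] = oracle_label(i)
            labeled.append(i)
    return model
```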