
    The Touchstone Project: Saving and Sharing Montana's Community Heritage

    The Touchstone Project is a multidisciplinary program that helps communities preserve their heritage and threatened historic places and share their human experience with a broad audience. This pioneering approach takes traditional efforts to collect and digitize historic materials and oral interviews and makes them far more dynamic, relevant, and accessible through a digital archive that will reside in a local historical repository and be uploaded to the state's online memory project. Additionally, we will invite new information, tap new audiences, and share the content across the World Wide Web through social networking. Professional historians and trained curators will pilot this innovative effort with people in four small towns, ensuring that materials are handled, housed, and digitized according to the highest curatorial standards, in order to save threatened heritage while creating a hopeful model for celebrating history and reinvigorating neighborhoods and communities.

    Application of semantic web technologies for automatic multimedia annotation


    Exploring the academic invisible web

    Purpose: To provide a critical review of Bergman's 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws, and a literature review of approaches for uncovering information from the Invisible Web. Findings: Bergman's size estimate of the Invisible Web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited by a small sample size and a lack of reliable data. Practical implications: We show that no single library alone will be able to index the Academic Invisible Web; we suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the Academic Invisible Web. Comment: 13 pages, 3 figures.
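
    As a rough illustration only (not the authors' method, and with made-up numbers), the sketch below shows the kind of naive extrapolation such size estimates rest on: a mean record count from a sample of databases multiplied by an assumed number of relevant databases.

```python
# Illustrative sketch of a naive size extrapolation for the Academic
# Invisible Web. All figures are placeholders, not numbers from the paper.

def estimate_total_records(sampled_counts, estimated_database_count):
    """Extrapolate total records from a sample of database record counts."""
    mean_records = sum(sampled_counts) / len(sampled_counts)
    return mean_records * estimated_database_count

# Made-up sample of five databases and an assumed population of
# 20,000 academically relevant databases.
sample = [1_200_000, 350_000, 80_000, 2_500_000, 600_000]
print(f"{estimate_total_records(sample, 20_000):,.0f} records (rough estimate)")
```

    Because database sizes are typically highly skewed, a mean-based extrapolation like this one can overestimate badly, which is one reason such raw estimates are easy to question.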

    Unified access to media metadata on the web: Towards interoperability using a core vocabulary.

    The goal of the W3C's Media Annotation Working Group (MAWG) is to promote interoperability between multimedia metadata formats on the Web. Audiovisual data is omnipresent on today's Web, yet differing interaction interfaces and, especially, diverse metadata formats prevent unified search, access, and navigation. MAWG has addressed this issue by developing an interlingua ontology and an associated API. This article discusses the rationale and core concepts of the ontology and API for media resources. The specifications developed by MAWG enable interoperable, contextualized, and semantic annotation and search, independent of the source metadata format, and connect multimedia data to the Linked Data cloud. Some demonstrators of such applications are also presented in this article.
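
    The sketch below illustrates the general idea of an interlingua mapping between format-specific metadata and a shared core vocabulary; the field names, format labels, and mapping tables are simplified examples, not the normative W3C ontology or API.

```python
# Simplified illustration of mapping format-specific metadata fields onto a
# shared core vocabulary. Field and format names are examples only; see the
# W3C Ontology and API for Media Resources for the normative definitions.

# Hypothetical mapping tables from source-format fields to core properties.
MAPPINGS = {
    "dublin_core": {"dc:title": "title", "dc:creator": "creator"},
    "id3":         {"TIT2": "title", "TPE1": "creator"},
    "exif":        {"ImageDescription": "description", "Artist": "creator"},
}

def to_core_vocabulary(source_format, source_metadata):
    """Translate format-specific metadata into the shared core vocabulary."""
    mapping = MAPPINGS.get(source_format, {})
    return {core: value
            for field, value in source_metadata.items()
            if (core := mapping.get(field)) is not None}

# Two different source formats yield comparable core properties, which is
# what enables unified search and access across formats.
print(to_core_vocabulary("dublin_core", {"dc:title": "Tour video", "dc:creator": "A. Smith"}))
print(to_core_vocabulary("id3", {"TIT2": "Tour video", "TPE1": "A. Smith"}))
```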

    Developing Learning Content Management Systems Based on Learning Objects-- Issues and Opportunities

    Abstract – The e-learning industry is looking forward to the day when teachers and learners can manage e-learning “on the fly”: learning content that can be personalized, assembled, and accessed on demand. Development teams would be able to build content a single time, store it electronically in different formats, and reuse it with a few clicks. This becomes possible through the Learning Objects concept. According to some e-learning professionals that day has dawned; to others it is still a distant future. In this paper we examine the concept and the arguments for and against it.
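
    A minimal sketch of the “build once, reuse anywhere” idea follows; the fields are loosely inspired by common learning-object metadata and are illustrative rather than taken from a specific standard or LCMS.

```python
# Minimal sketch of reusable learning objects assembled into a course on
# demand. Field names are illustrative, not from a specific metadata standard.
from dataclasses import dataclass

@dataclass
class LearningObject:
    identifier: str
    title: str
    description: str
    format: str          # e.g. "text/html", "video/mp4"
    keywords: list[str]

def assemble_course(repository, wanted_keywords):
    """Pick every object whose keywords overlap the requested topics."""
    return [lo for lo in repository
            if set(wanted_keywords) & set(lo.keywords)]

repository = [
    LearningObject("lo-1", "Intro to HTML", "Basics of markup", "text/html", ["web", "html"]),
    LearningObject("lo-2", "CSS Selectors", "Styling rules", "text/html", ["web", "css"]),
    LearningObject("lo-3", "SQL Joins", "Combining tables", "video/mp4", ["databases"]),
]
print([lo.title for lo in assemble_course(repository, ["web"])])
```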

    Towards robust and reliable multimedia analysis through semantic integration of services

    Thanks to ubiquitous Web connectivity and portable multimedia devices, it has never been so easy to produce and distribute new multimedia resources such as videos, photos, and audio. This ever-increasing production leads to an information overload for consumers, which calls for efficient multimedia retrieval techniques. Multimedia resources can be efficiently retrieved using their metadata, but the multimedia analysis methods that can automatically generate this metadata are currently not reliable enough for highly diverse multimedia content. A reliable and automatic method for analyzing general multimedia content is needed. We introduce a domain-agnostic framework that annotates multimedia resources using currently available multimedia analysis methods. By using a three-step reasoning cycle, this framework can assess and improve the quality of multimedia analysis results, by consecutively (1) combining analysis results effectively, (2) predicting which results might need improvement, and (3) invoking compatible analysis methods to retrieve new results. By using semantic descriptions for the Web services that wrap the multimedia analysis methods, compatible services can be automatically selected. By using additional semantic reasoning on these semantic descriptions, the different services can be repurposed across different use cases. We evaluated this problem-agnostic framework in the context of video face detection, and showed that it is capable of providing the best analysis results regardless of the input video. The proposed methodology can serve as a basis to build a generic multimedia annotation platform, which returns reliable results for diverse multimedia analysis problems. This allows for better metadata generation and improves the efficient retrieval of multimedia resources.
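
    A schematic of the three-step cycle might look like the sketch below; the result representation, merge rule, confidence threshold, and service interface are assumptions for illustration, not the framework's actual API.

```python
# Schematic sketch of the three-step reasoning cycle: (1) combine analysis
# results, (2) predict which results need improvement, (3) invoke further
# services to obtain new results. All interfaces and thresholds are
# illustrative assumptions.

def combine(results):
    """Step 1: keep the highest-confidence result per region (simplistic merge)."""
    best = {}
    for r in results:
        region = r["region"]
        if region not in best or r["confidence"] > best[region]["confidence"]:
            best[region] = r
    return list(best.values())

def reasoning_cycle(video, services, threshold=0.8, max_rounds=3):
    results = [svc(video, None) for svc in services]               # initial analysis pass
    for _ in range(max_rounds):
        merged = combine(results)                                  # step 1: combine
        weak = [r for r in merged if r["confidence"] < threshold]  # step 2: predict weak results
        if not weak:
            break
        # Step 3: in the framework, compatible services are selected via their
        # semantic descriptions; here we simply retry every service on the weak region.
        for r in weak:
            results.extend(svc(video, r["region"]) for svc in services)
    return combine(results)

# Two hypothetical face detectors with different strengths.
def detector_a(video, region):
    return {"region": region or "frame-42", "label": "face", "confidence": 0.65}

def detector_b(video, region):
    return {"region": region or "frame-42", "label": "face", "confidence": 0.90}

print(reasoning_cycle("holiday.mp4", [detector_a, detector_b]))
```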

    Right to Know: A Diet of the Future Presently Upon Us


    LESIM: A Novel Lexical Similarity Measure Technique for Multimedia Information Retrieval

    Metadata-based similarity measurement is far from obsolete today, despite research's focus on content and context. It allows information to be aggregated from textual references, similarity to be measured when content is not available, and it supports traditional keyword search in search engines, result merging in meta-search engines, and many other activities of interest to research and industry. Existing similarity measures consider neither the unique nature of multimedia metadata nor the requirements of metadata-based multimedia information retrieval. This work proposes a hybrid similarity measure customised for the commonly available author-title multimedia metadata, which is shown through experimentation to be significantly more effective than baseline measures.
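
    The LESIM formula itself is not reproduced here; the sketch below only shows the general shape of a hybrid author-title similarity, mixing a character-level and a token-level component with assumed field weights.

```python
# Illustrative hybrid similarity for author-title metadata records. This is
# NOT the LESIM measure from the paper; it only demonstrates combining a
# character-level and a token-level component with assumed weights.
from difflib import SequenceMatcher

def char_sim(a, b):
    """Character-level similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def token_sim(a, b):
    """Token-overlap (Jaccard) similarity in [0, 1]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def hybrid_record_sim(rec1, rec2, w_title=0.7, w_author=0.3, alpha=0.5):
    """Weighted mix of per-field similarities; all weights are assumptions."""
    def field_sim(x, y):
        return alpha * char_sim(x, y) + (1 - alpha) * token_sim(x, y)
    return (w_title * field_sim(rec1["title"], rec2["title"])
            + w_author * field_sim(rec1["author"], rec2["author"]))

a = {"title": "Multimedia Information Retrieval", "author": "J. Doe"}
b = {"title": "Multimedia information retrieval systems", "author": "Doe, J."}
print(round(hybrid_record_sim(a, b), 3))
```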