
    Extending the 5S Framework of Digital Libraries to support Complex Objects, Superimposed Information, and Content-Based Image Retrieval Services

    Advanced services in digital libraries (DLs) have been developed and widely used to address the required capabilities of an assortment of systems as DLs expand into diverse application domains. These systems may require support for images (e.g., Content-Based Image Retrieval), complex (information) objects, and use of content at fine grain (e.g., Superimposed Information). Because there is no consensus on precise theoretical definitions for these services, implementation efforts often involve ad hoc development, leading to duplication and interoperability problems. This article presents a methodology that addresses these problems by extending a precisely specified minimal digital library (in the 5S framework) with formal definitions of the aforementioned services. The theoretical extensions of digital library functionality presented here are reinforced with practical case studies, as well as scenarios for the individual and integrative use of the services, to balance theory and practice. An implication of this methodology is that other advanced services can be continuously integrated into the extended framework as they are identified. The theoretical definitions and case studies we present may benefit future development efforts and a wide range of digital library researchers, designers, and developers.
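    At its core, a Content-Based Image Retrieval service such as the one formalized above ranks indexed images by similarity between feature vectors. A minimal sketch, not the 5S formalization itself; the toy 4-D feature vectors and the cosine-similarity choice are illustrative assumptions:

```python
import numpy as np

def cbir_rank(query_vec, index_vecs):
    """Rank indexed images by cosine similarity to a query feature vector."""
    q = query_vec / np.linalg.norm(query_vec)
    m = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
    sims = m @ q                      # cosine similarity per indexed image
    return np.argsort(-sims), sims    # best matches first

# Toy index of three images described by 4-D feature vectors.
index = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
order, sims = cbir_rank(np.array([1.0, 0.05, 0.0, 0.0]), index)
```

    A real DL service would obtain the feature vectors from an image-analysis pipeline; the ranking step itself is this simple.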

    Social media analytics: a survey of techniques, tools and platforms

    This paper is written for (social science) researchers seeking to analyze the wealth of social media now available. It presents a comprehensive review of software tools for social networking media, wikis, really simple syndication feeds, blogs, newsgroups, chat and news feeds. For completeness, it also includes introductions to social media scraping, storage, data cleaning and sentiment analysis. Although principally a review, the paper also provides a methodology and a critique of social media tools. Analyzing social media, in particular Twitter feeds for sentiment analysis, has become a major research and business activity due to the availability of web-based application programming interfaces (APIs) provided by Twitter, Facebook and News services. This has led to an ‘explosion’ of data services, software tools for scraping and analysis, and social media analytics platforms. It is also a research area undergoing rapid change and evolution due to commercial pressures and the potential for using social media data for computational (social science) research. Using a simple taxonomy, this paper provides a review of leading software tools and how to use them to scrape, cleanse and analyze the spectrum of social media. In addition, it discusses the requirements of an experimental computational environment for social media research and presents, as an illustration, the system architecture of a social media (analytics) platform built by University College London. The principal contribution of this paper is to provide an overview (including code fragments) for scientists seeking to utilize social media scraping and analytics either in their research or business. The data retrieval techniques presented in this paper are valid at the time of writing (June 2014), but they are subject to change since social media data scraping APIs are rapidly changing.
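    As a flavor of the cleanse-then-analyze pipeline this survey covers, here is a minimal sketch of tweet cleansing followed by lexicon-based sentiment scoring. The tiny lexicon and the sample tweet are illustrative assumptions, not material from the survey; real studies use curated resources such as SentiWordNet or VADER:

```python
import re

# Tiny illustrative sentiment lexicon (an assumption for demonstration).
LEXICON = {"good": 1, "great": 2, "happy": 1, "bad": -1, "awful": -2, "sad": -1}

def cleanse(tweet):
    """Strip URLs, @mentions, and the '#' of hashtags before analysis."""
    tweet = re.sub(r"https?://\S+", "", tweet)
    tweet = re.sub(r"@\w+", "", tweet)
    return tweet.replace("#", "").lower()

def sentiment(tweet):
    """Sum lexicon scores over tokens: >0 positive, <0 negative, 0 neutral."""
    return sum(LEXICON.get(tok, 0) for tok in cleanse(tweet).split())

score = sentiment("Great storm coverage by @BBC http://t.co/xyz #happy")
```

    Scraping itself would sit in front of this step, via whichever platform API the researcher has access to; those APIs change rapidly, as the paper notes, so the cleansing and scoring stages are the more stable parts of the pipeline.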

    Processing and Linking Audio Events in Large Multimedia Archives: The EU inEvent Project

    In the inEvent EU project [1], we aim at structuring, retrieving, and sharing large archives of networked, and dynamically changing, multimedia recordings, mainly consisting of meetings, videoconferences, and lectures. More specifically, we are developing an integrated system that performs audiovisual processing of multimedia recordings and labels them in terms of interconnected “hyper-events” (a notion inspired by hypertexts). Each hyper-event is composed of simpler facets, including audio-video recordings and metadata, which are then easier to search, retrieve and share. In the present paper, we mainly cover the audio processing aspects of the system, including speech recognition, speaker diarization and linking (across recordings), the use of these features for hyper-event indexing and recommendation, and the search portal. We present initial results for feature extraction from lecture recordings using the TED talks. Index Terms: networked multimedia events; audio processing; speech recognition; speaker diarization and linking; multimedia indexing and searching; hyper-events.
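    Speaker linking across recordings, as mentioned above, is often cast as clustering of per-speaker embedding vectors. A minimal greedy sketch, an illustrative assumption rather than the inEvent system's actual method; the 2-D embeddings and the 0.8 threshold are toy values:

```python
import numpy as np

def link_speakers(embeddings, threshold=0.8):
    """Greedily link speaker embeddings across recordings: each embedding
    joins the first cluster whose centroid is within `threshold` cosine
    similarity, otherwise it starts a new cluster (a new global speaker)."""
    clusters = []   # list of lists of unit-normalized embeddings
    labels = []
    for e in embeddings:
        e = e / np.linalg.norm(e)
        best, best_sim = None, threshold
        for i, members in enumerate(clusters):
            c = np.mean(members, axis=0)
            sim = float(e @ (c / np.linalg.norm(c)))
            if sim >= best_sim:
                best, best_sim = i, sim
        if best is None:
            clusters.append([e])
            labels.append(len(clusters) - 1)
        else:
            clusters[best].append(e)
            labels.append(best)
    return labels

# Two recordings, two speakers: embeddings 0 and 2 are the same person.
embs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.95, 0.05])]
labels = link_speakers(embs)
```

    A production system would derive the embeddings from the diarization front end and likely use a more robust clustering scheme, but the linking decision reduces to this kind of similarity test.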

    SUPER: Towards the Use of Social Sensors for Security Assessments and Proactive Management of Emergencies

    Social media statistics during recent disasters (e.g. the 20 million tweets relating to storm 'Sandy' and the sharing of related photos on Instagram at a rate of 10 per second) suggest that the understanding and management of real-world events by civil protection and law enforcement agencies could benefit from the effective blending of social media information into their resilience processes. In this paper, we argue that despite the widespread use of social media in various domains (e.g. marketing/branding/finance), there is still no easy, standardized and effective way to leverage different social media streams -- also referred to as social sensors -- in security/emergency management applications. We also describe the EU FP7 project SUPER (Social sensors for secUrity assessments and Proactive EmeRgencies management), started in 2014, which aims to tackle this technology gap.

    Automatically detecting important moments from everyday life using a mobile device

    This paper proposes a new method to detect important moments in our lives. Our work is motivated by the increase in the quantity of multimedia data, such as videos and photos, which capture life experiences into personal archives. Even though such media-rich data suggests visual processing to identify important moments, the oft-mentioned problem of the semantic gap means that users cannot automatically identify or retrieve important moments using visual processing techniques alone. Our approach utilises on-board sensors from mobile devices to automatically identify important moments as they are happening.
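    One simple way to illustrate a sensor-based approach of this kind (not the authors' actual method) is to flag windows of unusually high variance in the accelerometer magnitude stream; the window length, threshold, and synthetic signal below are illustrative assumptions:

```python
import statistics

def detect_moments(accel_magnitudes, window=5, threshold=2.0):
    """Return start indices of sliding windows whose accelerometer-magnitude
    variance exceeds `threshold` -- a crude proxy for 'something happening'."""
    hits = []
    for i in range(len(accel_magnitudes) - window + 1):
        if statistics.variance(accel_magnitudes[i:i + window]) > threshold:
            hits.append(i)
    return hits

# Quiet signal with a burst of activity in the middle.
signal = [1.0] * 10 + [1.0, 5.0, 9.0, 5.0, 1.0] + [1.0] * 10
moments = detect_moments(signal)
```

    A real detector would fuse several sensors (accelerometer, GPS, Bluetooth proximity, audio level) and learn the thresholds, but each channel contributes through a test of roughly this shape.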

    Microblogging macrochallenges for repositories

    The Web offers social scientists and other researchers unprecedented data resources for primary empirical research and for the development of analytical and methodological skills. The ubiquity of confessional interviews across the mass media offers instant access to people’s everyday lives and thoughts. Reflecting on these developments, Savage and Burrows (2007) insist that academic researchers should engage seriously with the digital data produced outside the academy, and that these data pose fundamental challenges for sociological practice.

    Television and the future internet: the NoTube project

    [1st paragraph] 'New technology is transforming the TV industry', Mark Thompson, BBC Director General, told the newspaper The Observer. The classic notion of TV as a set in the living room with finite channels and linear programming is already gone: TV has moved into the world of Internet and mobile technology, and content is growing exponentially in number and diversity. The notion of channels is being replaced by individual choice and on-demand programming. Distinctions between TV and other streaming content are blurred: both live in a shared, connected online world. We expect that as the Future Internet develops, TV will complete this disruptive paradigm shift into becoming ubiquitous, always available, and increasingly personalized. NoTube is an EU-funded project (in Objective 4.3, Intelligent Information Management) which began in February 2009 and runs for three years, with the goal of preparing TV for the Future Internet -- addressing the challenges of TV content ubiquity and choice, personalization and integration.

    Knowledge will Propel Machine Understanding of Content: Extrapolating from Current Examples

    Full text link
    Machine Learning has been a big success story during the AI resurgence. One particular standout success relates to learning from a massive amount of data. In spite of early assertions of the unreasonable effectiveness of data, there is increasing recognition of the value of utilizing knowledge whenever it is available or can be created purposefully. In this paper, we discuss the indispensable role of knowledge for deeper understanding of content where (i) large amounts of training data are unavailable, (ii) the objects to be recognized are complex (e.g., implicit entities and highly subjective content), and (iii) applications need to use complementary or related data in multiple modalities/media. What brings us to the cusp of rapid progress is our ability to (a) create relevant and reliable knowledge and (b) carefully exploit knowledge to enhance ML/NLP techniques. Using diverse examples, we seek to foretell unprecedented progress in our ability for deeper understanding and exploitation of multimodal data and continued incorporation of knowledge in learning techniques.
    Comment: Pre-print of the paper accepted at the 2017 IEEE/WIC/ACM International Conference on Web Intelligence (WI). arXiv admin note: substantial text overlap with arXiv:1610.0770