
    Scenemash: Multimodal Route Summarization for City Exploration

    The potential of mining tourist information from social multimedia data gives rise to new applications offering much richer impressions of the city. In this paper we propose Scenemash, a system that generates multimodal summaries of multiple alternative routes between locations in a city. To gain insight into the geographic areas along a route, we collect a dataset of community-contributed images and their associated annotations from Foursquare and Flickr. We identify images and terms representative of a geographic area by jointly analysing the distributions of a large number of semantic concepts detected in the visual content and of latent topics extracted from the associated text. The Scenemash prototype is implemented as an Android app for smartphones and smartwatches.
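
    A minimal sketch, assuming hypothetical per-area concept/topic counts, of the kind of joint representativeness scoring described above; the function name and the smoothed frequency-ratio score are illustrative assumptions, not the paper's actual method.

        # Illustrative sketch only: the paper's exact scoring is not given in this
        # abstract. Rank concepts/terms as representative of a geographic area by
        # comparing the area's distribution against the city-wide distribution.
        from collections import Counter

        def representativeness(area_counts, global_counts, alpha=1.0):
            """Smoothed ratio of a token's relative frequency in the area
            to its relative frequency in the whole collection."""
            vocab = set(global_counts) | set(area_counts)
            area_total = sum(area_counts.values())
            global_total = sum(global_counts.values())
            scores = {}
            for token in vocab:
                p_area = (area_counts.get(token, 0) + alpha) / (area_total + alpha * len(vocab))
                p_global = (global_counts.get(token, 0) + alpha) / (global_total + alpha * len(vocab))
                scores[token] = p_area / p_global
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        # Toy counts of detected visual concepts / latent text topics.
        area = Counter({"canal": 12, "bicycle": 9, "museum": 2})
        city = Counter({"canal": 40, "bicycle": 120, "museum": 90, "market": 60})
        print(representativeness(area, city)[:3])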

    Rethinking summarization and storytelling for modern social multimedia

    Traditional summarization initiatives have focused on specific types of documents such as articles, reviews, videos, image feeds, or tweets, a practice which may result in pigeonholing the summarization task in the context of modern, content-rich multimedia collections. Consequently, much of the research to date has revolved around mostly toy problems in narrow domains, working with single-source media types. We argue that summarization and story generation systems need to re-focus the problem space in order to meet the information needs in the age of user-generated content in different formats and languages. Here we create a framework for flexible multimedia storytelling. Narratives, stories, and summaries carry a set of challenges in big data and dynamic multi-source media that give rise to new research in spatial-temporal representation, viewpoint generation, and explanation.

    The role of earth observation in an integrated deprived area mapping “system” for low-to-middle income countries

    Urbanization in the global South has been accompanied by the proliferation of vast informal and marginalized urban areas that lack access to essential services and infrastructure. UN-Habitat estimates that close to a billion people currently live in these deprived and informal urban settlements, generally grouped under the term urban slums. Two major knowledge gaps undermine the efforts to monitor progress towards the corresponding sustainable development goal (i.e., SDG 11—Sustainable Cities and Communities). First, the data available for cities worldwide are patchy and insufficient to differentiate between the diversity of urban areas with respect to their access to essential services and their specific infrastructure needs. Second, existing approaches used to map deprived areas (i.e., aggregated household data, Earth observation (EO), and community-driven data collection) are mostly siloed, and, individually, they often lack transferability and scalability and fail to include the opinions of different interest groups. In particular, EO-based deprived area mapping approaches are mostly top-down, with very little attention given to ground information and interaction with urban communities and stakeholders. Existing top-down methods should be complemented with bottom-up approaches to produce routinely updated, accurate, and timely deprived area maps. In this review, we first assess the strengths and limitations of existing deprived area mapping methods. We then propose an Integrated Deprived Area Mapping System (IDeAMapS) framework that leverages the strengths of EO- and community-based approaches. The proposed framework offers a way forward to map deprived areas globally, routinely, and with maximum accuracy to support SDG 11 monitoring and the needs of different interest groups.

    ITR/IM: Enabling the Creation and Use of GeoGrids for Next Generation Geospatial Information

    The objective of this project is to advance science in information management, focusing in particular on geospatial information. It addresses the development of concepts, algorithms, and system architectures to enable users on a grid to query, analyze, and contribute to multivariate, quality-aware geospatial information. The approach consists of three complementary research areas: (1) establishing a statistical framework for assessing geospatial data quality; (2) developing uncertainty-based query processing capabilities; and (3) supporting the development of space- and accuracy-aware adaptive systems for geospatial datasets. The results of this project will support the extension of the concept of the computational grid to facilitate ubiquitous access, interaction, and contributions of quality-aware next generation geospatial information. By developing novel query processes as well as quality and similarity metrics, the project aims to enable the integration and use of large collections of dispersed information of varying quality and accuracy. This supports the evolution of a novel geocomputational paradigm, moving away from current standards-driven approaches to an inclusive, adaptive system, with example potential applications in mobile computing, bioinformatics, and geographic information systems. This experimental research is linked to educational activities in three different academic programs among the three participating sites. The outreach activities of this project include collaboration with U.S. federal agencies involved in geospatial data collection, an international partner (Brazil's National Institute for Space Research), and the organization of a 2-day workshop with the participation of U.S. and international experts.
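
    A hedged sketch of what an uncertainty-based geospatial query could look like, with an assumed record layout and accuracy threshold; it only illustrates the idea of filtering results by reported positional quality, not the project's actual design.

        # Illustrative sketch, not the project's architecture: a bounding-box
        # query over geospatial records that carry a positional-accuracy
        # estimate, returning only records meeting the caller's quality needs.
        from dataclasses import dataclass

        @dataclass
        class GeoRecord:
            lat: float
            lon: float
            label: str
            accuracy_m: float  # estimated positional error in metres

        def quality_aware_range_query(records, bbox, max_error_m):
            """bbox = (min_lat, min_lon, max_lat, max_lon)."""
            min_lat, min_lon, max_lat, max_lon = bbox
            return [r for r in records
                    if min_lat <= r.lat <= max_lat
                    and min_lon <= r.lon <= max_lon
                    and r.accuracy_m <= max_error_m]

        data = [GeoRecord(52.37, 4.90, "surveyed station", 5.0),
                GeoRecord(52.36, 4.89, "crowdsourced poi", 50.0)]
        print(quality_aware_range_query(data, (52.3, 4.8, 52.4, 5.0), max_error_m=10.0))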

    Multimodal Classification of Urban Micro-Events

    In this paper we seek methods to effectively detect urban micro-events. Urban micro-events are events that occur in cities, have limited geographical coverage, and typically affect only a small group of citizens. Because of their scale, they are difficult to identify in most data sources. However, by using citizen sensing to gather data, detecting them becomes feasible. The data gathered by citizen sensing is often multimodal and, as a consequence, the information required to detect urban micro-events is distributed over multiple modalities. This makes it essential to have a classifier capable of combining them. In this paper we explore several methods of creating such a classifier, including early, late, and hybrid fusion, as well as representation learning using multimodal graphs. We evaluate performance on a real-world dataset obtained from a live citizen reporting system. We show that a multimodal approach yields higher performance than unimodal alternatives. Furthermore, we demonstrate that our hybrid combination of early and late fusion with multimodal embeddings performs best in classification of urban micro-events.
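
    A minimal sketch, under assumed synthetic features, contrasting early fusion (feature concatenation) with late fusion (averaging per-modality classifier probabilities); it illustrates the general techniques named above, not the paper's implementation.

        # Toy comparison of early vs. late fusion for two modalities; the
        # synthetic data, feature shapes, and choice of logistic regression
        # are illustrative assumptions, not details taken from the paper.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 200
        X_text = rng.normal(size=(n, 16))    # stand-in for text-report features
        X_image = rng.normal(size=(n, 32))   # stand-in for visual features
        y = (X_text[:, 0] + X_image[:, 0] > 0).astype(int)

        # Early fusion: concatenate modality features, train a single classifier.
        early = LogisticRegression(max_iter=1000).fit(np.hstack([X_text, X_image]), y)

        # Late fusion: one classifier per modality, then average probabilities.
        clf_text = LogisticRegression(max_iter=1000).fit(X_text, y)
        clf_image = LogisticRegression(max_iter=1000).fit(X_image, y)
        late_proba = (clf_text.predict_proba(X_text)[:, 1] +
                      clf_image.predict_proba(X_image)[:, 1]) / 2

        # Training-set accuracy only, to keep the sketch short.
        print("early fusion:", early.score(np.hstack([X_text, X_image]), y))
        print("late fusion:", ((late_proba >= 0.5).astype(int) == y).mean())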