
    Mapping the visual magnitude of popular tourist sites in Edinburgh city

    There is value in being able to automatically measure and visualise the visual magnitude of city sites (monuments, buildings, and tourist attractions), for example in urban planning, as an aid to automated wayfinding, or in augmented reality city guides. Here we present the outputs of an algorithm that calculates visual magnitude both as an absolute measure of façade area and in terms of a building's perceived magnitude (its diminishing importance with distance). Both metrics influence the photogenic nature of a site. We therefore compared the outputs against maps showing the locations from which geo-located Flickr images were taken. The results accord with the metrics and therefore help disambiguate the meaning of Flickr tags.
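    The abstract does not give the exact formula, but a minimal sketch of the two metrics it names might look like the following, assuming perceived magnitude is approximated as façade area divided by squared viewing distance (an angular-size style weighting; the paper's actual model may differ).

```python
def absolute_magnitude(facade_width_m: float, facade_height_m: float) -> float:
    """Absolute visual magnitude: the facade area in square metres."""
    return facade_width_m * facade_height_m


def perceived_magnitude(facade_area_m2: float, distance_m: float) -> float:
    """Perceived magnitude: facade area discounted with viewing distance.

    Modelled here as area / distance^2 (roughly the solid angle subtended
    by the facade); this is an assumption, not the paper's stated method.
    """
    return facade_area_m2 / (distance_m ** 2)


# Example: a 30 m x 20 m facade viewed from 100 m and from 400 m
area = absolute_magnitude(30.0, 20.0)      # 600 m^2
print(perceived_magnitude(area, 100.0))    # 0.06
print(perceived_magnitude(area, 400.0))    # 0.00375, far less prominent
```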

    The REAL corpus: A crowd-sourced Corpus of human generated and evaluated spatial references to real-world urban scenes

    We present a newly crowd-sourced data set of natural language references to objects anchored in complex urban scenes (in short: the REAL Corpus, for Referring Expressions Anchored Language). The REAL corpus contains a collection of images of real-world urban scenes together with verbal descriptions of target objects generated by humans, paired with data on how successfully other people were able to identify the same objects from these descriptions. In total, the corpus contains 32 images with on average 27 descriptions per image and 3 verifications for each description. In addition, the corpus is annotated with a variety of linguistically motivated features. The paper highlights issues posed by collecting data using crowd-sourcing with an unrestricted input format, as well as using real-world urban scenes. The corpus will be released via the ELRA repository as part of this submission.
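    The abstract describes the corpus structure (images, descriptions, verifications) but not its release format. A hypothetical in-memory representation of one entry, purely for illustration, could look like this; all field and class names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Verification:
    """One attempt by another annotator to identify the described object."""
    annotator_id: str
    identified_correctly: bool


@dataclass
class Description:
    """A human-generated referring expression for a target object."""
    text: str
    target_object_id: str
    verifications: List[Verification] = field(default_factory=list)


@dataclass
class SceneImage:
    """One urban-scene image with its crowd-sourced descriptions."""
    image_id: str
    descriptions: List[Description] = field(default_factory=list)


# Hypothetical entry: one image, one description, three verifications
scene = SceneImage(
    image_id="urban_scene_01",
    descriptions=[
        Description(
            text="the red phone box to the left of the statue",
            target_object_id="phonebox_3",
            verifications=[
                Verification("v1", True),
                Verification("v2", True),
                Verification("v3", False),
            ],
        )
    ],
)
```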

    Collaboration on an Ontology for Generalisation

    To move beyond the current plateau in automated cartography we need greater sophistication in the process of selecting generalisation algorithms. This is particularly so in the context of machine comprehension. We also need to build on existing algorithm development instead of duplicating it. More broadly, we need to model the geographical context that drives the selection, sequencing, and degree of application of generalisation algorithms. We argue that a collaborative effort is required to create and share an ontology for cartographic generalisation focused on supporting the algorithm selection process. The benefits of developing a collective ontology will be increased sharing of algorithms and support for on-demand mapping and generalisation web services.
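    To make the idea of an algorithm-selection ontology concrete, a minimal sketch of one catalogue entry is shown below. The field names and values are illustrative assumptions only; the paper envisages a collaboratively agreed ontology (typically expressed in OWL/RDF), not this ad hoc structure.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class GeneralisationAlgorithm:
    """An illustrative catalogue entry for one generalisation algorithm."""
    name: str
    operator: str                # e.g. "simplification", "displacement"
    feature_classes: List[str]   # feature types it applies to
    preconditions: List[str]     # geographic context in which it is suitable
    typical_scale_range: str     # target map scales it supports


road_simplification = GeneralisationAlgorithm(
    name="Douglas-Peucker simplification",
    operator="simplification",
    feature_classes=["road centreline", "river"],
    preconditions=["no topology conflicts after simplification"],
    typical_scale_range="1:25k to 1:100k",
)
```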