
    TALP-UPC at MediaEval 2014 Placing Task: Combining geographical knowledge bases and language models for large-scale textual georeferencing

    This paper describes our georeferencing approaches, experiments, and results at the MediaEval 2014 Placing Task evaluation. The task consists of predicting the most probable geographical coordinates of Flickr images and videos using their visual, audio and associated metadata features. Our approaches used only the Flickr users' textual metadata annotations and tagsets. We used four approaches for this task: 1) an approach based on Geographical Knowledge Bases (GeoKB); 2) the Hiemstra Language Model (HLM) approach with Re-Ranking; 3) a combination of the GeoKB and the HLM (GeoFusion); and 4) a combination of the GeoFusion with an HLM model derived from the georeferenced pages of the English Wikipedia. The HLM approach with Re-Ranking showed the best performance at distances from 10 m to 1 km. The GeoFusion approaches achieved the best results for error margins from 10 km to 5000 km. This work has been supported by the Spanish Research Department (SKATER Project: TIN2012-38584-C06-01). TALP Research Center is recognized as a Quality Research Group (2014 SGR 1338) by AGAUR, the Research Department of the Catalan Government.
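    The Hiemstra language-model scoring used above can be illustrated with a minimal sketch: each candidate cell's tag distribution is interpolated with a global background model, and a photo is assigned to the best-scoring cell. All place cells, tags and the smoothing weight below are invented for illustration; a real system would train on millions of georeferenced Flickr photos.

    ```python
    import math
    from collections import Counter

    # Toy training data: tags of already-georeferenced photos, bucketed by a
    # (lat, lon) cell. Both the cells and the tags are invented.
    TRAINING = {
        (41.4, 2.2):  ["barcelona", "sagrada", "familia", "gaudi", "beach"],
        (48.9, 2.35): ["paris", "eiffel", "tower", "louvre", "seine"],
    }

    def hlm_score(tags, cell_tags, background, lam=0.7):
        """Hiemstra LM: interpolate the cell's tag model with a background model."""
        cell = Counter(cell_tags)
        total_bg = sum(background.values())
        score = 0.0
        for t in tags:
            p_cell = cell[t] / len(cell_tags)      # max-likelihood cell model
            p_bg = background[t] / total_bg        # corpus-wide background model
            score += math.log(lam * p_cell + (1 - lam) * p_bg + 1e-12)
        return score

    def georeference(tags):
        """Return the training cell whose language model best explains the tags."""
        background = Counter(t for ts in TRAINING.values() for t in ts)
        return max(TRAINING, key=lambda c: hlm_score(tags, TRAINING[c], background))

    print(georeference(["eiffel", "tower"]))  # (48.9, 2.35)
    ```

    The background interpolation keeps unseen tags from zeroing out a cell's score, which is the point of the Hiemstra smoothing.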

    Estimating mobility of tourists. New Twitter-based procedure

    Twitter has been actively researched as a human mobility proxy. Tweets can contain two classes of geographical metadata: the location from which a tweet was published, and the place where the tweet is estimated to have been published. Nevertheless, Twitter also returns tweets without any geographical metadata when queried for tweets in a specific location. This study presents a methodology that includes an algorithm for estimating the geographical coordinates of tweets to which Twitter does not assign any. Our objective is to determine the origin and the route that a tourist followed, even if Twitter does not return geographically identified data. This is carried out through geographical searches for tweets inside a defined area. Once a tweet is found inside an area but its metadata contains no explicit geographical coordinates, its coordinates are estimated by iteratively performing geographical searches with a decreasing search radius. This algorithm was tested in two tourist villages in Madrid (Spain) and a major city in Canada. A set of tweets without geographical coordinates was found and processed in these areas, and the coordinates of a subset of them were successfully estimated. Agencia Estatal de Investigación | Ref. PID2020-116040RB-I00; Universidade de Vigo/CISU
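    The decreasing-radius loop can be sketched as follows. The `search` callable is a hypothetical stand-in for a Twitter geographical query that reports whether the tweet of interest is still returned for a given centre and radius; coordinates are treated as planar for simplicity, and the five half-radius probe circles do not perfectly cover the parent circle, so a production version would use a denser cover.

    ```python
    def estimate_coordinates(search, center, radius, min_radius=0.001):
        """Iteratively shrink the search radius to bound a tweet's location.

        search(center, radius) must return True when the tweet of interest
        appears among the results of a geographical query on that circle.
        """
        lat, lon = center
        while radius > min_radius:
            half = radius / 2
            # Probe five overlapping half-radius circles inside the current one.
            probes = [(lat + half, lon), (lat - half, lon),
                      (lat, lon + half), (lat, lon - half), (lat, lon)]
            for c in probes:
                if search(c, half):
                    lat, lon = c      # tweet still found: tighten the bound
                    radius = half
                    break
            else:
                break  # tweet no longer found: keep the last good estimate
        return (lat, lon), radius
    ```

    Each successful iteration halves the radius, so the returned radius is an upper bound on the estimation error whenever the initial search contained the tweet.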

    Identifying the Geographic Location of an Image with a Multimodal Probability Density Function

    There is a wide array of online photographic content that is not geotagged. Algorithms for efficient and accurate geographical estimation of an image are needed to geolocate these photos. This paper presents a general model for using both textual metadata and visual features of photos to automatically place them on a world map.
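    One simple way to combine textual and visual evidence into a single probability surface is a naive-Bayes-style cell-wise product over a grid of candidate locations. This is a minimal sketch under that assumption; the cells and per-cell likelihoods are invented and the paper's actual model is not reproduced here.

    ```python
    def normalize(grid):
        """Rescale a cell -> weight mapping so the weights sum to 1."""
        s = sum(grid.values())
        return {cell: p / s for cell, p in grid.items()}

    def fuse(a, b):
        """Multiply two densities cell-wise and renormalize (naive Bayes style)."""
        return normalize({c: a[c] * b[c] for c in a})

    # Hypothetical per-cell likelihoods for one photo, one surface per modality.
    p_text   = {"london": 0.6, "paris": 0.3, "rome": 0.1}
    p_visual = {"london": 0.2, "paris": 0.5, "rome": 0.3}

    posterior = fuse(normalize(p_text), normalize(p_visual))
    best = max(posterior, key=posterior.get)
    print(best, posterior[best])
    ```

    Multiplying the surfaces lets a strong visual cue override a weak textual one (and vice versa), which is the appeal of a multimodal density over a single-modality classifier.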

    How INSPIREd is NERC?

    The Natural Environment Research Council (www.nerc.ac.uk) is the UK's main agency for funding and managing research, training and knowledge exchange in the environmental sciences. In 2007 NERC commissioned a consultancy to prepare an INSPIRE baseline and Road Map to enable it to be compliant with the EU INSPIRE Directive well ahead of the deadlines listed in the Directive. This study provided:
    • A baseline of INSPIRE readiness across NERC with respect to INSPIRE requirements for metadata, discovery, view and download services;
    • A description of what NERC will need to provide to fully comply with the INSPIRE Directive;
    • A description of the technology options that are currently envisaged to implement the INSPIRE Directive;
    • A Road Map to show what NERC must do to meet the INSPIRE Directive;
    • An estimate of resources required to implement the INSPIRE Directive.
    This paper outlines the findings of this study.

    MSUO Information Technology and Geographical Information Systems: Common Protocols & Procedures. Report to the Marine Safety Umbrella Operation

    The Marine Safety Umbrella Operation (MSUO) facilitates cooperation between Interreg-funded Marine Safety Projects and maritime stakeholders. The main aim of MSUO is to permit efficient operation of new projects through Project Cooperation Initiatives; these include the review of common protocols and procedures for Information Technology (IT) and Geographical Information Systems (GIS). This study, carried out by CSA Group and the National Centre for Geocomputation (NCG), reviews current spatial information standards in Europe and the data management methodologies associated with different marine safety projects. International best practice was reviewed based on the combined experience of spatial data research at NCG and initiatives in the US, Canada and the UK relating to marine security service information and to the acquisition and integration of large marine datasets for ocean management purposes. This report identifies the most appropriate international data management practices that could be adopted for future MSUO projects.

    A geo-temporal information extraction service for processing descriptive metadata in digital libraries

    In the context of digital map libraries, resources are usually described by metadata records that define the relevant subject, location, time-span, format and keywords. Where locations and time-spans are concerned, metadata records are often incomplete, or they provide information in a way that is not machine-understandable (e.g. textual descriptions). This paper presents techniques for extracting geo-temporal information from text, using relatively simple text mining methods that leverage a Web gazetteer service. The idea is to go from human-made geo-temporal referencing (i.e. place and period names in textual expressions) to geo-spatial coordinates and time-spans. A prototype system implementing the proposed methods is described in detail. Experimental results demonstrate the efficiency and accuracy of the proposed approaches.
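    The core lookup step, place and period names in text resolved to coordinates and year spans, can be sketched as a simple dictionary match. The gazetteer and period entries below are invented stand-ins for the Web gazetteer service the paper relies on; real extraction would also need tokenization, disambiguation and fuzzy matching.

    ```python
    # Hypothetical gazetteer and period list standing in for the Web services.
    GAZETTEER = {"lisbon": (38.72, -9.14), "porto": (41.15, -8.61)}
    PERIODS = {"world war ii": (1939, 1945), "19th century": (1801, 1900)}

    def extract_geo_temporal(text):
        """Scan a metadata description for known place and period names."""
        low = text.lower()
        places = {name: GAZETTEER[name] for name in GAZETTEER if name in low}
        spans = {name: PERIODS[name] for name in PERIODS if name in low}
        return places, spans

    places, spans = extract_geo_temporal(
        "Map of Lisbon drawn during the 19th century")
    print(places, spans)
    ```

    The output pairs each recognised name with machine-readable coordinates or a year span, which is exactly the human-to-machine translation the abstract describes.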

    Spatial information retrieval and geographical ontologies: an overview of the SPIRIT project

    A large proportion of the resources available on the world-wide web refer to information that may be regarded as geographically located. Most activities and enterprises take place in one or more places on the Earth's surface, and there is a wealth of survey data, images, maps and reports that relate to specific places or regions. Despite the prevalence of geographical context, existing web search facilities are poorly adapted to help people find information that relates to a particular location. When the name of a place is typed into a typical search engine, web pages that include that name in their text will be retrieved, but many resources that are also associated with the place may not be. Thus resources relating to places inside the specified place may not be found, nor may resources relating to places that are nearby, or that are equivalent but referred to by another name. Specifying geographical context frequently requires the use of spatial relationships concerning, for example, distance or containment, yet such terminology cannot be understood by existing search engines. Here we provide a brief survey of existing facilities for geographical information retrieval on the web, before describing a set of tools and techniques being developed in the SPIRIT project: Spatially-Aware Information Retrieval on the Internet (funded by European Commission Framework V, Project IST-2001-35047).
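    The distance and containment relationships mentioned above can be made concrete with two small predicates over (lat, lon) points: a haversine great-circle distance for "near" and a bounding-box test for "inside". These are standard techniques, not SPIRIT's actual implementation, and the coordinates below are illustrative.

    ```python
    import math

    def haversine_km(a, b):
        """Great-circle distance between two (lat, lon) points in kilometres."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        dlat, dlon = lat2 - lat1, lon2 - lon1
        h = (math.sin(dlat / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))  # mean Earth radius ~6371 km

    def near(point, place, km):
        """Spatial relationship: point lies within `km` kilometres of `place`."""
        return haversine_km(point, place) <= km

    def inside(point, bbox):
        """Spatial relationship: point lies within a (SW, NE) bounding box."""
        (south, west), (north, east) = bbox
        return south <= point[0] <= north and west <= point[1] <= east

    # Illustrative coordinates for a query like "hotels near Cardiff, in Wales".
    cardiff = (51.48, -3.18)
    newport = (51.58, -3.00)
    wales_bbox = ((51.3, -5.4), (53.5, -2.6))
    print(near(newport, cardiff, 30), inside(newport, wales_bbox))
    ```

    A spatially-aware engine would evaluate predicates like these against document footprints rather than matching place names as plain text, which is how "nearby" and "contained in" queries become answerable.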

    Bridging the Gap Between Traditional Metadata and the Requirements of an Academic SDI for Interdisciplinary Research

    Metadata has long been understood as a fundamental component of any Spatial Data Infrastructure, providing information relating to the discovery, evaluation and use of datasets and describing their quality. Having good metadata about a dataset is fundamental to using it correctly and to understanding the implications of issues such as missing data or incorrect attribution for the results of any analysis carried out. Traditionally, spatial data was created by expert users (e.g. national mapping agencies), who also created the metadata for the data. Increasingly, however, data used in spatial analysis comes from multiple sources and may be captured or used by non-expert users, for example academic researchers, many of whom come from non-GIS disciplinary backgrounds, are not familiar with metadata, and may be working in geographically dispersed teams. This paper examines the applicability of metadata in this academic context, using a multi-national coastal/environmental project as a case study. The work to date highlights a number of suggestions for good practice, issues and research questions relevant to an Academic SDI, particularly given the increased levels of research data sharing and reuse required by UK and EU funders.