
    Enabling post-recording deep georeferencing of walkthrough videos : an interactive approach

    Dissertation submitted in partial fulfilment of the requirements for the degree of Master of Science in Geospatial Technologies.
    Large-scale databases of georeferenced video streams have countless applications in industry and research, from efficient storage in geodatabases to the execution of spatial queries. Because video-capturing devices have become ubiquitous, crowdsourced social media is a rich source of video content. However, these social apps usually do not support geo-metadata, or restrict it to a single location on Earth. In other cases, regular users lack the hardware and software required to capture video footage with a deep georeference (position and orientation over time). There is a clear lack of methods for extracting that spatial component from video footage. This study proposes and evaluates a new method for manually capturing and extracting the spatial georeference in the post-production phase of video content. The proposed framework is based on a map-based user interface synchronized with the video stream. The efficiency and usability of the resulting framework were evaluated in a user study; in addition, the manually extracted geo-metadata was compared with metadata previously captured by hardware in order to evaluate the quality of the method.
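
    The deep georeference described above (position and orientation over time) suggests a simple data model: the user drops timestamped pose keyframes on the synchronized map, and poses for intermediate video frames are interpolated. Below is a minimal sketch of that idea; the GeoKeyframe structure and the linear interpolation scheme are illustrative assumptions, not the thesis's actual implementation.

```python
from dataclasses import dataclass
from bisect import bisect_right

@dataclass
class GeoKeyframe:
    t: float        # video timestamp in seconds
    lat: float
    lon: float
    heading: float  # camera orientation in degrees, 0 = north

def interpolate_pose(keyframes, t):
    """Linearly interpolate position and heading at video time t
    from the user-placed keyframes (assumed sorted by t)."""
    if t <= keyframes[0].t:
        return keyframes[0]
    if t >= keyframes[-1].t:
        return keyframes[-1]
    i = bisect_right([k.t for k in keyframes], t)
    a, b = keyframes[i - 1], keyframes[i]
    w = (t - a.t) / (b.t - a.t)
    # Shortest-arc interpolation keeps the heading continuous across 0/360.
    dh = ((b.heading - a.heading + 180) % 360) - 180
    return GeoKeyframe(t,
                       a.lat + w * (b.lat - a.lat),
                       a.lon + w * (b.lon - a.lon),
                       (a.heading + w * dh) % 360)
```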

    Multifaceted Geotagging for Streaming News

    News sources on the Web generate constant streams of information, describing the events that shape our world. In particular, geography plays a key role in the news, and understanding the geographic information present in news allows for its useful spatial browsing and retrieval. This process of understanding is called geotagging, and involves first finding in the document all textual references to geographic locations, known as toponyms, and second, assigning the correct lat/long values to each toponym; these steps are termed toponym recognition and toponym resolution, respectively. These steps are difficult due to ambiguities in natural language: some toponyms share names with non-location entities, and further, a given toponym can have many location interpretations. Removing these ambiguities is crucial for successful geotagging. To this end, geotagging methods are described which were developed for streaming news. First, a spatio-textual search engine named STEWARD and an interactive map-based news browsing system named NewsStand are described, which feature geotaggers as central components and served as motivating systems and experimental testbeds for developing geotagging methods. Next, a geotagging methodology is presented that follows a multifaceted approach involving a variety of techniques. First, a multifaceted toponym recognition process is described that uses both rule-based and machine learning–based methods to ensure high toponym recall. Next, various forms of toponym resolution evidence are explored. One such type of evidence is lists of toponyms, termed comma groups, whose toponyms share a common thread in their geographic properties that enables correct resolution. In addition to explicit evidence, authors take advantage of the implicit geographic knowledge of their audiences. Understanding the local places known by an audience, termed its local lexicon, affords great performance gains when geotagging articles from local newspapers, which account for the vast majority of news on the Web. Finally, considering windows of text of varying size around each toponym, termed adaptive context, allows for a tradeoff between geotagging execution speed and toponym resolution accuracy. Extensive experimental evaluations of all the above methods, using existing corpora as well as two newly created large corpora of streaming news, show great performance gains over several competing prominent geotagging methods.
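
    The adaptive-context idea is easiest to see in miniature: resolve an ambiguous toponym by checking which of its candidate interpretations best agrees with other place names inside a window of surrounding tokens, where the window size trades execution speed for resolution accuracy. The toy sketch below illustrates this under simplified assumptions; the resolve_toponym function and its evidence sets are invented for illustration and are far cruder than the dissertation's comma-group and local-lexicon evidence.

```python
def resolve_toponym(tokens, idx, interpretations, window):
    """Toy adaptive context: pick the interpretation of the toponym at
    tokens[idx] whose associated place names appear most often within
    `window` tokens of it. `interpretations` maps each candidate
    (lat, lon) to a set of names consistent with it, e.g. containing
    states or countries; a simplification of the real evidence."""
    lo, hi = max(0, idx - window), min(len(tokens), idx + window + 1)
    context = set(tokens[lo:hi])
    def support(candidate):
        return len(interpretations[candidate] & context)
    # Larger windows gather more evidence but take longer to scan:
    # the speed/accuracy trade-off the abstract describes.
    return max(interpretations, key=support)

# "Paris" near "Texas" should resolve to Paris, TX, not Paris, France.
tokens = "the fire near Paris , Texas , burned for days".split()
candidates = {(48.86, 2.35): {"France", "Seine"},
              (33.66, -95.56): {"Texas", "Lamar"}}
print(resolve_toponym(tokens, 3, candidates, window=4))  # (33.66, -95.56)
```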

    Introduction to the second international symposium of platial information science

    People ‘live’ and constitute places every day through recurrent practices and experience. Our everyday lives, however, are complex, and so are places. In contrast to abstract space, the way people experience places includes a range of aspects like physical setting, meaning, and emotional attachment. This inherent complexity requires researchers to investigate the concept of place from a variety of viewpoints. The formal representation of place, a major goal in place-related GIScience, is no exception and can only be successfully addressed if we consider geographical, psychological, anthropological, sociological, cognitive, and other perspectives. This year’s symposium brings together place-based researchers from different disciplines to discuss the current state of platial research. Accordingly, this volume contains contributions from a range of fields including geography, psychology, cognitive science, linguistics, and cartography.

    Towards an Extensible Expert-Sourcing Platform

    University of Minnesota Ph.D. dissertation. May 2019. Major: Computer Science. Advisor: Mohamed Mokbel. 1 computer file (PDF); viii, 106 pages.
    In recent years, general-purpose crowdsourcing platforms, e.g., Amazon Mechanical Turk, Figure Eight, and ChinaCrowds, have been gaining popularity due to their capability to solve tasks that are still difficult for machines to solve, e.g., labeling data, sorting images, computing skylines over noisy data, and sentiment analysis. Unfortunately, current crowdsourcing platforms lack a very important feature desired by many recent crowdsourcing applications, namely, recruiting workers who are expert at a given task. Being able to recruit expert workers allows those applications to achieve not only more accurate but also higher-quality results than recruiting the general crowd. We call such a crowdsourcing process expert-sourcing, i.e., outsourcing tasks to experts. Without a platform to support them, the developers of each expert-sourcing application need to build the whole crowdsourcing system stack from scratch, even though these systems share many common components. This thesis proposes Luna, the first extensible expert-sourcing platform. To instantiate a new expert-sourcing application from Luna, one only needs to provide a few simple plug-ins that are integrated with Luna's core components to provide the expert-sourcing platform for the new application. This is possible because Luna identifies the components that can be shared among many expert-sourcing applications and the components that need to be tailored to a specific application. In this thesis, we show the extensibility of Luna by instantiating six different expert-sourcing applications that are currently not well supported by general-purpose crowdsourcing platforms. Experimental evaluation with a real crowdsourcing deployment as well as a real dataset shows that Luna achieves not only more accurate but also higher-quality results than existing general-purpose crowdsourcing platforms in supporting expert-sourcing applications. Lastly, we also provide a more specialized expert-sourcing platform for an image geotagging application that was initially deemed unfit to be solved by crowdsourcing.
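
    The split between shared core components and application-specific plug-ins can be pictured with a small interface sketch. The ExpertSourcingPlugin contract and run_task loop below are hypothetical names assumed for illustration; they are not Luna's actual API.

```python
from abc import ABC, abstractmethod

class ExpertSourcingPlugin(ABC):
    """Hypothetical plug-in contract: an application supplies only these
    task-specific pieces, while a shared core handles worker recruitment,
    task routing, and result collection."""

    @abstractmethod
    def expertise_score(self, worker, task) -> float:
        """Estimate how expert a worker is for a task (0.0 to 1.0)."""

    @abstractmethod
    def aggregate(self, answers) -> object:
        """Combine the answers of several experts into one result."""

def run_task(plugin, task, workers, k=3):
    """Shared core logic, identical for every application: recruit the k
    workers the plug-in rates as most expert, collect their answers
    (workers are assumed to expose an answer(task) method), aggregate."""
    ranked = sorted(workers, key=lambda w: plugin.expertise_score(w, task),
                    reverse=True)
    answers = [w.answer(task) for w in ranked[:k]]
    return plugin.aggregate(answers)
```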

    The principle of geotagging : cross-linking archival sources with people and the city through digital urban places

    This article discusses technical solutions for representing archival sources in urban areas. We strive to realise the interconnectedness of sources, their beholders, and the entities they concern through the location where the information was first recorded. This is exemplified by a recent project at the University of Graz. To that end, we need to identify problems in the analogue world, mainly concerning the classification of archiving, semiotic systems, descriptions, and assignments. We use existing mobile technologies and software applications from different application fields and test their suitability for our purpose. Comparing and transferring analogue methods to the digital world is a real challenge, one we gladly accept when it comes to solving the identified problems that arise in the context of modes of practice in archives and web representations.
    Funded by the Horizon 2020 Framework Programme of the European Union. Peer-reviewed.

    A Transformer-based Framework for POI-level Social Post Geolocation

    POI-level geo-information in social posts is critical to many location-based applications and services. However, the multi-modal, complex, and diverse nature of social media data and platforms limits the performance of inferring such fine-grained locations and of their subsequent applications. To address this issue, we present a transformer-based general framework for social post geolocation at the POI level, which builds upon pre-trained language models and considers non-textual data. To this end, inputs are categorized to handle different social data, and an optimal combination strategy is provided for feature representations. Moreover, a uniform representation of hierarchy is proposed to learn temporal information, and a concatenated version of encodings is employed to better capture feature-wise positions. Experimental results on various social datasets demonstrate that three variants of our proposed framework outperform multiple state-of-the-art baselines by a large margin in terms of accuracy and distance-error metrics.
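
    One way to read the "uniform representation of hierarchy" for temporal information is as a concatenation of time features at several granularities, joined with the text embedding before classification. The sketch below is an assumption-laden illustration of that reading, not the paper's exact scheme.

```python
from datetime import datetime

def one_hot(index, size):
    v = [0.0] * size
    v[index] = 1.0
    return v

def hierarchical_time_encoding(ts: datetime):
    """Encode posting time at several granularities and concatenate,
    so the model sees month, weekday, and hour as one feature vector
    (an assumed stand-in for the paper's hierarchy representation)."""
    return (one_hot(ts.month - 1, 12)
            + one_hot(ts.weekday(), 7)
            + one_hot(ts.hour, 24))

def post_features(text_embedding, ts):
    # Concatenate the language-model embedding of the post text with
    # the non-textual (temporal) features before the classifier head.
    return text_embedding + hierarchical_time_encoding(ts)

vec = post_features([0.12, -0.40, 0.33],  # stand-in for an LM embedding
                    datetime(2022, 7, 4, 18, 30))
print(len(vec))  # 3 + 12 + 7 + 24 = 46
```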

    Does ‘bigger’ mean ‘better’? Pitfalls and shortcuts associated with big data for social research

    ‘Big data is here to stay.’ This key statement has a double value: it is an assumption as well as the reason why a theoretical reflection is needed. Furthermore, Big data is gaining visibility and success even in the social sciences, overcoming the division between the humanities and computer science. In this contribution, some considerations on the presence, and the certain persistence, of Big data as a socio-technical assemblage are outlined. On that basis, the intriguing opportunities for social research linked to this interaction between practices and technological development are developed. However, despite the promissory rhetoric fostered by several scholars since the birth of Big data as a labelled concept, some risks are just around the corner. The claims for the methodological power of bigger and bigger datasets, as well as increasing speed in analysis and data collection, are creating real hype in social research. Particular attention is needed in order to avoid some pitfalls. These risks are analysed with respect to the validity of research results obtained through Big data. After this pars destruens, the contribution concludes with a pars construens: building on the previous critiques, a mixed-methods research design is described as a general proposal, with the objective of stimulating a debate on the integration of Big data into complex research projects.

    Kartta Labs: Collaborative Time Travel

    We introduce the modular and scalable design of Kartta Labs, an open-source, open-data system for virtually reconstructing cities from historical maps and photos. Kartta Labs relies on crowdsourcing and artificial intelligence and consists of two major modules: Maps and 3D models. Each module, in turn, consists of sub-modules that enable the system to reconstruct a city from historical maps and photos. The result is a spatiotemporal reference that can be used to integrate various collected data (curated, sensed, or crowdsourced) for research, education, and entertainment purposes. The system empowers users to experience collaborative time travel: they work together to reconstruct the past and experience it on an open-source and open-data platform.
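
    The spatiotemporal reference can be imagined as a lookup from a place and a year to the reconstructed layers that cover it. The MapLayer structure and layers_for query below are purely illustrative assumptions about how such an index might look, not Kartta Labs' actual data model.

```python
from dataclasses import dataclass

@dataclass
class MapLayer:
    name: str
    year: int
    bbox: tuple  # (min_lon, min_lat, max_lon, max_lat) of the rectified map

def layers_for(lon, lat, year, layers):
    """Toy spatiotemporal lookup: all layers whose footprint covers the
    point, ordered by how close they are to the requested year."""
    hits = [l for l in layers
            if l.bbox[0] <= lon <= l.bbox[2] and l.bbox[1] <= lat <= l.bbox[3]]
    return sorted(hits, key=lambda l: abs(l.year - year))

nyc_1900 = layers_for(-73.99, 40.73, 1900, [
    MapLayer("Manhattan 1893", 1893, (-74.03, 40.68, -73.90, 40.88)),
    MapLayer("Brooklyn 1898", 1898, (-74.05, 40.57, -73.83, 40.74)),
])
print([l.name for l in nyc_1900])  # ['Brooklyn 1898', 'Manhattan 1893']
```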

    Improving the Geotagging Accuracy of Street-level Images

    Integrating images taken at street level with satellite imagery is becoming increasingly valuable in decision-making processes, not only for individuals but also in the business and governmental sectors. To perform this integration, images taken at street level need to be accurately georeferenced. This georeference information can be derived from a global positioning system (GPS). However, GPS data is prone to errors of up to 15 meters and needs to be corrected for the purpose of georeferencing. In this thesis, an automatic method is proposed for correcting the georeference information obtained from GPS data, based on image registration techniques. The proposed method uses an optimization technique to find locally optimal solutions by matching high-level features and their relative locations. A global optimization method is then employed over all of the local solutions by applying a geometric constraint. The main contribution of this thesis is introducing a new direction for correcting GPS data that is more economical and more consistent than the existing manual method. Besides its high cost (labor and management), the main concern with manual correction is the low degree of consistency between different human operators; the proposed automatic software-based method addresses both drawbacks. Other contributions are: (1) a modified Chamfer matching (CM) cost function that improves the accuracy of standard CM for images with misleading or disturbing edges; (2) a Monte-Carlo-inspired statistical analysis that makes it possible to quantify the overall performance of the proposed algorithm; (3) a novel similarity measure for applying the normalized cross-correlation (NCC) technique to multi-level thresholded images, which compares multi-modal images more accurately than the standard application of NCC on raw images; and (4) casting the problem of selecting an optimal global solution among a set of local minima as finding an optimal path in a graph using Dijkstra's algorithm. We used our algorithm to correct the georeference information of 20 chains containing more than 7000 fisheye images, and our experimental results show that the proposed algorithm achieves an average error of 2 meters, which is acceptable for most applications.
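
    Contribution (4) is concrete enough to sketch: each image in a chain has several locally optimal corrected positions, and one globally consistent chain is chosen as a shortest path through the layered graph of candidates using Dijkstra's algorithm. In the sketch below, match_cost and smoothness are placeholder cost functions standing in for the thesis's Chamfer matching and NCC terms and its geometric constraint.

```python
import heapq

def best_global_path(candidates, match_cost, smoothness):
    """Pick one candidate position per frame so the whole chain is
    geometrically consistent, via a shortest-path search over the layered
    graph of local solutions. match_cost(f, c) scores candidate c for
    frame f; smoothness(a, b) penalizes implausible jumps between
    consecutive frames. Both costs are assumed non-negative."""
    n = len(candidates)
    # Heap entries: (accumulated cost, frame, candidate index, parent index)
    pq = [(match_cost(0, c), 0, j, -1) for j, c in enumerate(candidates[0])]
    heapq.heapify(pq)
    done, parent = set(), {}
    while pq:
        cost, f, j, pj = heapq.heappop(pq)
        if (f, j) in done:
            continue
        done.add((f, j))
        parent[(f, j)] = pj
        if f == n - 1:                     # reached the last frame
            path = [j]
            while f > 0:
                j = parent[(f, j)]
                f -= 1
                path.append(j)
            return path[::-1]              # chosen candidate index per frame
        for k, c in enumerate(candidates[f + 1]):
            step = cost + match_cost(f + 1, c) + smoothness(candidates[f][j], c)
            heapq.heappush(pq, (step, f + 1, k, j))
    return []
```

    Because every edge cost is non-negative, the first time the last frame is popped from the heap the reconstructed path is optimal.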