
    Discovering the spatial coverage of the documents through the SpatialCIM Methodology.

    The main focus of this paper is to present the SpatialCIM methodology for identifying the spatial coverage of documents within the Brazilian geographic area. The methodology uses a linguistic tool to assist in the entity recognition process; this tool classifies the recognized entities as person, organization, time, and location, among others. The location entities are then checked against a geographic information system (GIS) in order to extract their Brazilian geographic paths. If there are multiple geographic paths for a single entity, a disambiguation process is carried out, which attempts to find the best geographic path for that entity by considering all the geographic entities in the text. Another important objective of this paper is to show that the disambiguation process improves the geographic classification of the documents based on the obtained geographic paths. The validation process uses a set of news items previously labeled by an expert, which are compared against the disambiguated and non-disambiguated geographic paths. The results show that classification with disambiguation outperforms classification without it.
    Keywords: ambiguity problem resolution, spatial coverage identification, toponym resolution
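    A minimal sketch of the disambiguation idea described above, assuming a hypothetical mapping from each recognized location entity to its candidate geographic paths (country > state > city); the candidate sharing the most path ancestors with the other entities' candidates is kept. The data structures and gazetteer entries are illustrative, not the SpatialCIM implementation.

    ```python
    # Path-based toponym disambiguation sketch (illustrative data, not SpatialCIM code).
    # Each candidate is a geographic path such as ("Brasil", "Sao Paulo", "Campinas").

    def shared_prefix(a, b):
        """Length of the common leading segment of two geographic paths."""
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n

    def disambiguate(candidates_per_entity):
        """candidates_per_entity: dict entity -> list of candidate paths (tuples)."""
        resolved = {}
        for entity, candidates in candidates_per_entity.items():
            others = [c for e, cs in candidates_per_entity.items() if e != entity for c in cs]
            # Score each candidate by how much geographic context it shares with
            # the candidates of every other entity mentioned in the same text.
            best = max(candidates,
                       key=lambda cand: sum(shared_prefix(cand, o) for o in others))
            resolved[entity] = best
        return resolved

    if __name__ == "__main__":
        text_entities = {
            "Campinas": [("Brasil", "Sao Paulo", "Campinas")],
            "Santa Rita": [("Brasil", "Paraiba", "Santa Rita"),
                           ("Brasil", "Sao Paulo", "Santa Rita do Passa Quatro")],
        }
        # The Sao Paulo candidate wins because it shares more context with "Campinas".
        print(disambiguate(text_entities))
    ```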

    The SpatialCIM methodology for spatial document coverage disambiguation and the entity recognition process aided by linguistic techniques.

    Abstract. Nowadays it is increasingly common for users to take the geographical localization of documents into account during information retrieval. However, conventional information retrieval systems based on keyword matching do not consider which words may represent geographical entities that are spatially related to other entities in the document. This paper presents the SpatialCIM methodology, which consists of three steps: pre-processing, data expansion, and disambiguation. In the pre-processing step, entity recognition is carried out with the support of the Rembrandt tool. Additionally, a comparison is presented between the performance of the Rembrandt tool and that of a controlled vocabulary of Brazilian geographic locations in discovering location entities in texts. For the comparison, a set of geographically labeled news items in Portuguese covering the sugar cane culture is used. The results show that disambiguation increases the F-measure from 45% to 50% for the Rembrandt tool, and from 35% to 38% for the controlled vocabulary. The results also show that the Rembrandt tool has a minimal difference between precision and recall, although the controlled vocabulary always yields the highest recall values.
    GeoDoc 2012, PAKDD 2012
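    The reported comparison rests on precision, recall, and F-measure over the location entities found in the labeled news set. A small helper, under the simplifying assumption of set-valued gold and predicted location lists per document, could look like the sketch below; the paper's exact matching criteria are not reproduced.

    ```python
    # Precision / recall / F-measure sketch for comparing an entity recognizer
    # (e.g. the Rembrandt tool) against a controlled vocabulary (illustrative only).

    def prf(gold, predicted):
        """gold, predicted: sets of location entities for one document."""
        tp = len(gold & predicted)
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(gold) if gold else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    if __name__ == "__main__":
        gold = {"Sao Paulo", "Ribeirao Preto", "Goias"}
        predicted = {"Sao Paulo", "Ribeirao Preto", "Parana"}
        print(prf(gold, predicted))   # (0.666..., 0.666..., 0.666...)
    ```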

    A survey on the geographic scope of textual documents.

    Recognizing references to places in texts is needed in many applications, such as search engines, location-based social media, and document classification. In this paper we present a survey of methods and techniques for the recognition and identification of places referenced in texts. We discuss concepts and terminology, and propose a classification of the solutions given in the literature. We introduce a definition of the Geographic Scope Resolution (GSR) problem, dividing it into three steps: geoparsing, reference resolution, and grounding references. Solutions to the first two steps are organized according to the method used, and solutions to the third step are organized according to the type of output produced. We found that it is difficult to compare existing solutions directly to one another, because they often create their own benchmarking data, targeted to their own problem
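    The three GSR steps named in the survey can be pictured as a small pipeline. The toy gazetteer, texts, and the naive "first candidate" resolution rule below are invented for illustration; real systems use NER models and far richer disambiguation.

    ```python
    # Toy end-to-end illustration of the three GSR steps: geoparsing (find place
    # names), reference resolution (pick a gazetteer record), grounding (emit
    # coordinates). All data is made up.

    TOY_GAZETTEER = {
        "Paris": [("Paris, France", 48.86, 2.35), ("Paris, Texas", 33.66, -95.56)],
        "Berlin": [("Berlin, Germany", 52.52, 13.40)],
    }

    def geoparse(text):
        """Step 1: naive lexicon matching; real systems use named entity recognition."""
        return [name for name in TOY_GAZETTEER if name in text]

    def resolve(mentions):
        """Step 2: choose one referent per mention; here, simply the first candidate."""
        return {m: TOY_GAZETTEER[m][0] for m in mentions}

    def ground(resolved):
        """Step 3: emit coordinates (or another scope representation) per place."""
        return {m: (lat, lon) for m, (label, lat, lon) in resolved.items()}

    if __name__ == "__main__":
        doc = "The conference moved from Paris to Berlin this year."
        print(ground(resolve(geoparse(doc))))
    ```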

    Geospatial database generation from digital newspapers: use case for risk and disaster domains.

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies. The generation of geospatial databases is expensive in terms of time and money, and many geospatial users still lack spatial data. Geographic Information Extraction and Retrieval systems can alleviate this problem. This work proposes a method to populate spatial databases automatically from the Web, applying the approach to the risk and disaster domain with digital newspapers as the data source. News stories in digital newspapers contain rich thematic information that can be attached to places. The use case of automating spatial database generation is applied to Mexico using placenames. In Mexico, small and medium disasters occur most years; the facts about these events are frequently mentioned in newspapers but rarely stored as records in national databases, so it is difficult to estimate the human and material losses they cause. This work presents two ways to extract information from digital news, using natural language techniques to distill the text and national gazetteer codes to achieve placename-attribute disambiguation. Two outputs are presented: a general one that exposes highly relevant news, and another that attaches attributes of interest to placenames. The latter achieved a 75% rate of thematic relevance under qualitative analysis
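    A rough sketch of the second output described above: attaching an attribute extracted from a news sentence to a gazetteer-coded place record. The gazetteer subset, the codes, and the regular expression are hypothetical stand-ins, not the thesis's actual resources.

    ```python
    # Sketch of placename-attribute extraction from a news sentence using a
    # (hypothetical) national gazetteer code table.

    import re

    # Hypothetical subset of a national gazetteer: place name -> official code.
    GAZETTEER = {"Acapulco": "GRO-001", "Veracruz": "VER-030"}

    def extract_event(sentence):
        """Return (place, attribute) records, e.g. damaged-home counts per place."""
        records = []
        damage = re.search(r"(\d+)\s+homes? damaged", sentence)
        for place, code in GAZETTEER.items():
            if place in sentence and damage:
                records.append({"place": place,
                                "gazetteer_code": code,
                                "homes_damaged": int(damage.group(1))})
        return records

    if __name__ == "__main__":
        news = "Heavy rain in Acapulco left 120 homes damaged on Tuesday."
        print(extract_event(news))
        # [{'place': 'Acapulco', 'gazetteer_code': 'GRO-001', 'homes_damaged': 120}]
    ```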

    Enhancing Road Infrastructure Monitoring: Integrating Drones for Weather-Aware Pothole Detection

    This abstract outlines a research proposal focused on the utilization of Unmanned Aerial Vehicles (UAVs) for monitoring potholes in road infrastructure affected by various weather conditions. The study aims to investigate how different materials that fill potholes, such as water, grass, sand, and snow-ice, are impacted by seasonal weather changes, ultimately affecting the performance of pavement structures. By integrating weather-aware monitoring techniques, the research seeks to enhance the rigidity and resilience of road surfaces, thereby contributing to more effective pavement management systems. The proposed methodology involves UAV image-based monitoring combined with advanced super-resolution algorithms to improve image refinement, particularly at high flight altitudes. Through case studies and experimental analysis, the study aims to assess the geometric precision of 3D models generated from aerial images, with a specific focus on road pavement distress monitoring. Overall, the research aims to address the challenges of traditional road failure detection methods by exploring cost-effective 3D detection techniques using UAV technology, thereby ensuring safer roadways for all users.

    Comparative Study of GIS and Conventional Household Survey Sampling Methods: Feasibility, Cost and Family Planning Coverage Estimates

    Background: Household surveys serve as the main source of data on reproductive, maternal, and child health in low- and middle-income countries (LMICs). Considering their significant role, ensuring the production of high-quality data is imperative. However, the high costs associated with conducting large-scale surveys in LMICs have led to a search for alternative survey sampling methods. This study compared two probability sampling methods: geographic information system (GIS) sampling and conventional sampling. It assessed the feasibility of GIS sampling, evaluated the equivalence of the sampling methods for selected family planning (FP) coverage indicators, and compared implementation costs. Methods: Concurrent cross-sectional surveys using the two sampling methods were implemented in the same 150 clusters in Burkina Faso. For the GIS method, free satellite images were used to digitize cluster boundaries and potentially residential structures. Feasibility was assessed using embedded mixed methods. An equivalence threshold (+/- 5 percentage points) for comparing FP indicators was defined using a confidence interval (CI) approach. Costs were estimated using micro-costing from an international donor's perspective. Average and incremental costs per cluster and costs per completed interview were calculated. Results: In the conventional method, 14,610 households were enumerated and 3,021 households sampled; in the GIS method, 58,120 structures were digitized and 3,371 households sampled. There was no statistically significant difference in the survey response rates for occupied dwellings between the two sampling methods (p=0.089). Qualitative results documented the advantages and challenges experienced during implementation of the GIS method. Of the 9,907 eligible women selected, 4,370 were in the conventional method, 3,913 in the GIS method, and 1,624 in both methods. The CIs of sociodemographic variables and FP indicators overlapped across both methods. The sampling methods yielded equivalent estimates of modern contraceptive prevalence and unmet need for FP. The cost difference between the methods was $43,529. Relative to the conventional method, the GIS method was 15% less expensive. Compared to conventional sampling, GIS sampling cost $266 and $314 less per cluster, and $13 and $4 less per completed interview, in the urban and rural areas, respectively. Conclusion: Using GIS for large-scale, probability-based household surveys is feasible in both urban and rural settings, if recent, high-resolution satellite images are available. It should be considered a valid alternative for deriving unbiased population coverage estimates in resource-constrained settings
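    A rough sketch of the confidence-interval equivalence check described above: two survey estimates of a coverage proportion are treated as equivalent when the CI of their difference falls within +/- 5 percentage points. The sample figures are invented, and the study's weighting and design effects are not modeled.

    ```python
    # CI-based equivalence check for two independent proportion estimates
    # (illustrative only; no survey weights or design effects).

    import math

    def diff_ci(p1, n1, p2, n2, z=1.96):
        """Wald 95% CI for the difference of two independent proportions."""
        diff = p1 - p2
        se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        return diff - z * se, diff + z * se

    def equivalent(p1, n1, p2, n2, margin=0.05):
        """True when the whole CI of the difference lies within +/- margin."""
        lo, hi = diff_ci(p1, n1, p2, n2)
        return -margin <= lo and hi <= margin

    if __name__ == "__main__":
        # Hypothetical modern contraceptive prevalence: 27% of 3,021 vs 26% of 3,371 women.
        print(equivalent(0.27, 3021, 0.26, 3371))   # True -> estimates deemed equivalent
    ```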

    Applications of Internet of Things

    This book introduces the Special Issue entitled “Applications of Internet of Things”, of ISPRS International Journal of Geo-Information. Topics covered in this issue fall into three main parts: (I) intelligent transportation systems (ITSs), (II) location-based services (LBSs), and (III) sensing techniques and applications. Three papers on ITSs are as follows: (1) “Vehicle positioning and speed estimation based on cellular network signals for urban roads,” by Lai and Kuo; (2) “A method for traffic congestion clustering judgment based on grey relational analysis,” by Zhang et al.; and (3) “Smartphone-based pedestrian’s avoidance behavior recognition towards opportunistic road anomaly detection,” by Ishikawa and Fujinami. Three papers on LBSs are as follows: (1) “A high-efficiency method of mobile positioning based on commercial vehicle operation data,” by Chen et al.; (2) “Efficient location privacy-preserving k-anonymity method based on the credible chain,” by Wang et al.; and (3) “Proximity-based asynchronous messaging platform for location-based Internet of things service,” by Gon Jo et al. Two papers on sensing techniques and applications are as follows: (1) “Detection of electronic anklet wearers’ groupings throughout telematics monitoring,” by Machado et al.; and (2) “Camera coverage estimation based on multistage grid subdivision,” by Wang et al.