
    An effective and efficient approach for manually improving geocoded data

    Background: The process of geocoding produces output coordinates of varying degrees of quality. Previous studies have revealed that simply excluding records with low-quality geocodes from analysis can introduce significant bias, but depending on the number and severity of the inaccuracies, their inclusion may also lead to bias. Little quantitative research has been presented on the cost and/or effectiveness of correcting geocodes through manual interactive processes, so the most cost-effective methods for improving geocoded data are unclear. The present work investigates the time and effort required to correct geocodes contained in five health-related datasets that represent examples of data commonly used in Health GIS. Results: Geocode correction was attempted on five health-related datasets containing a total of 22,317 records. The complete processing of these data took 11.4 weeks (427 hours), averaging 69 seconds of processing time per record. Overall, the geocodes associated with 12,280 (55%) of records were successfully improved, taking 95 seconds of processing time per corrected record on average across all five datasets. Geocode correction improved the overall match rate (the number of successful matches out of the total attempted) from 79.3% to 95%. The spatial shift between the locations of the original successfully matched geocodes and their corrected counterparts averaged 9.9 km per corrected record. After geocode correction, the numbers of city-accuracy and USPS ZIP code-accuracy geocodes were reduced from 10,959 and 1,031 to 6,284 and 200, respectively, while the number of building-centroid-accuracy geocodes increased from 0 to 2,261. Conclusion: The results indicate that manual geocode correction using a web-based interactive approach is a feasible and cost-effective method for improving the quality of geocoded data. The level of effort required varies depending on the type of data geocoded. These results can be used to choose between data improvement options (e.g., manual intervention, pseudocoding/geo-imputation, field GPS readings).
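As a quick check, the abstract's headline averages follow directly from its stated totals. A minimal sketch of that arithmetic (the numbers are from the abstract; the variable names are ours, not the study's):

```python
# Figures reported in the abstract.
total_records = 22_317       # records across the five datasets
total_hours = 427            # total processing time (11.4 weeks)
corrected_records = 12_280   # records whose geocodes were improved

total_seconds = total_hours * 3600

# Average processing time per record (abstract reports 69 s).
per_record = total_seconds / total_records
print(round(per_record))  # 69

# Share of records successfully improved (abstract reports 55%).
improved_share = corrected_records / total_records
print(round(improved_share * 100))  # 55
```

Note that the reported 95 s per corrected record is an average taken across the five datasets, so it is not simply `total_seconds / corrected_records`.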

    Archaeology and Language: The Indo-Iranians

    This review of recent archaeological work in Central Asia and Eurasia attempts to trace and date the movements of the Indo-Iranians: speakers of languages of the eastern branch of Proto-Indo-European that later split into the Iranian and Vedic families. Russian and Central Asian scholars working on the contemporary but very different Andronovo and Bactria-Margiana archaeological complexes of the 2nd millennium B.C. have identified both as Indo-Iranian, and particular sites so identified are being used for nationalist purposes. There is, however, no compelling archaeological evidence that they had a common ancestor or that either is Indo-Iranian. Ethnicity and language are not easily linked with an archaeological signature, and the identity of the Indo-Iranians remains elusive.

    An ecosystem for linked humanities data

    The main promise of the digital humanities is the ability to perform scholarly studies at a much broader scale, and in a much more reusable fashion. The key enabler for such studies is the availability of sufficiently well described data. For the field of socio-economic history, data usually comes in a tabular form. Existing efforts to curate and publish datasets take a top-down approach and are focused on large collections. This paper presents QBer and the underlying structured data hub, which address the long tail of research data by catering for the needs of individual scholars. QBer allows researchers to publish their (small) datasets, link them to existing vocabularies and other datasets, and thereby contribute to a growing collection of interlinked datasets. We present QBer, and evaluate our first results by showing how our system facilitates two use cases in socio-economic history.

    An architecture for establishing legal semantic workflows in the context of integrated law enforcement

    A previous version of this paper was presented at the Third Workshop on Legal Knowledge and the Semantic Web (LK&SW-2016), EKAW-2016, November 19th, Bologna, Italy. Traditionally, the integration of data from multiple sources is done on an ad-hoc basis for each task, an approach that leads to "silos" preventing the sharing of data across different agencies or tasks and that is unable to cope with the modern environment, where workflows, tasks, and priorities frequently change. Operating within the Data to Decision Cooperative Research Centre (D2D CRC), the authors are currently involved in the Integrated Law Enforcement Project, which has the goal of developing a federated data platform that will enable the execution of integrated analytics on data accessed from different external and internal sources, thereby providing effective support to an investigator or analyst working to evaluate evidence and manage lines of inquiry in the investigation. Technical solutions should also operate ethically, in compliance with the law, and subject to good governance principles.

    Use of winter cereal grains in ruminant supplementation in integrated crop-livestock systems (ILP).

    This article addresses the use of winter cereal grains in animal feed, as well as the different ways of conserving them, whether dry, via haymaking, or moist, via ensiling.

    The lonely teacher

    Includes bibliographies. Index: p. 151-15