
    Evaluating existing manually constructed natural landscape classification with a machine learning-based approach

    Some landscape classifications officially determine financial obligations; thus, they must be objective and precise. We presume it is possible to quantitatively evaluate existing manually constructed classifications and correct them if necessary. One option for achieving this goal is machine learning. By (re)modeling the landscape classification and explaining its structure, we can add quantitative proof to its original (qualitative) description. The main objectives of the paper are to evaluate the consistency of the existing manually constructed natural landscape classification with a machine learning-based approach and to test a newly developed general black-box explanation method that explains variable importance for the differentiation between natural landscape types. The approach consists of training a model of the existing classification and a general method for explaining variable importance. As an example, we evaluated the existing natural landscape classification of Slovenia from 1998, which is still officially used in the agricultural taxation process. Our results showed that the modeled classification confirms the original with a high rate of agreement (94%). The complementary map of classification uncertainty (entropy) gave us more information on the areas where the classification should be checked, and the analysis of variable importance provided insight into the differentiation between types. Although the selection of exclusively climatic variables seemed unusual at first, we were able to understand the computer's logic and support geographical explanations for the model. We conclude that the approach can enhance the explanation and evaluation of natural landscape classifications and can be transparently transferred to other areas.
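    The classification-uncertainty map mentioned above is based on entropy. A minimal sketch of how such per-cell entropy can be computed from a model's predicted class probabilities (the array shape and the use of bits are assumptions for illustration, not details from the paper):

```python
import numpy as np

def classification_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy (in bits) of per-cell class probabilities.

    probs: array of shape (n_cells, n_classes); each row sums to 1.
    High entropy flags cells whose landscape-type assignment is
    uncertain and therefore worth checking manually.
    """
    p = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
    return -(p * np.log2(p)).sum(axis=1)

# One cell confidently assigned to a single type, one ambiguous cell.
probs = np.array([
    [0.97, 0.01, 0.01, 0.01],  # near-certain: entropy close to 0
    [0.25, 0.25, 0.25, 0.25],  # maximally uncertain: entropy = 2 bits
])
entropy = classification_entropy(probs)
```

Mapping these entropy values back onto the study area yields exactly the kind of complementary uncertainty map the abstract describes.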

    The TREC-2002 video track report

    TREC-2002 saw the second running of the Video Track, the goal of which was to promote progress in content-based retrieval from digital video via open, metrics-based evaluation. The track used 73.3 hours of publicly available digital video (in MPEG-1/VCD format) downloaded by the participants directly from the Internet Archive (Prelinger Archives) (internetarchive, 2002) and some from the Open Video Project (Marchionini, 2001). The material comprised advertising, educational, industrial, and amateur films produced between the 1930s and the 1970s by corporations, nonprofit organizations, trade associations, community and interest groups, educational institutions, and individuals. 17 teams representing 5 companies and 12 universities - 4 from Asia, 9 from Europe, and 4 from the US - participated in one or more of three tasks in the 2002 video track: shot boundary determination, feature extraction, and search (manual or interactive). Results were scored by NIST using manually created truth data for shot boundary determination and manual assessment of feature extraction and search results. This paper is an introduction to, and an overview of, the track framework - the tasks, data, and measures - the approaches taken by the participating groups, the results, and issues regarding the evaluation. For detailed information about the approaches and results, the reader should see the various site reports in the final workshop proceedings.
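    Scoring shot boundary determination against manually created truth data reduces to matching detected boundaries to reference boundaries and computing precision and recall. A minimal illustrative sketch (the greedy matching scheme and the frame tolerance are assumptions, not the official NIST protocol):

```python
def score_boundaries(detected, reference, tol=5):
    """Greedily match detected shot boundaries (frame numbers) to
    reference boundaries within +/- tol frames, one-to-one, then
    report (precision, recall)."""
    ref = sorted(reference)
    used = [False] * len(ref)
    true_pos = 0
    for d in sorted(detected):
        for i, r in enumerate(ref):
            if not used[i] and abs(d - r) <= tol:
                used[i] = True  # each truth boundary matches at most once
                true_pos += 1
                break
    precision = true_pos / len(detected) if detected else 0.0
    recall = true_pos / len(ref) if ref else 0.0
    return precision, recall

# Two of three detections fall within tolerance of a truth boundary.
p, r = score_boundaries(detected=[10, 52, 100], reference=[12, 50, 200])
```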

    Conflating point of interest (POI) data: A systematic review of matching methods

    Point of interest (POI) data provide digital representations of places in the real world, and have been increasingly used to understand human-place interactions, support urban management, and build smart cities. Many POI datasets have been developed, which often have different geographic coverages, attribute focuses, and data quality. From time to time, researchers may need to conflate two or more POI datasets in order to build a better representation of the places in the study areas. While various POI conflation methods have been developed, a systematic review has been lacking; consequently, it is difficult for researchers new to POI conflation to quickly grasp and use these existing methods. This paper fills such a gap. Following the protocol of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), we conduct a systematic review by searching through three bibliographic databases using reproducible syntax to identify related studies. We then focus on a main step of POI conflation, i.e., POI matching, and systematically summarize and categorize the identified methods. Current limitations and future opportunities are discussed afterwards. We hope that this review can provide some guidance for researchers interested in conflating POI datasets for their research.
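    POI matching methods commonly combine spatial proximity with attribute (e.g. name) similarity. A minimal sketch of that general idea, assuming hypothetical record fields, thresholds, and a `difflib`-based name similarity that are illustrative choices, not methods taken from the review:

```python
import math
from difflib import SequenceMatcher

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_poi(a, b, max_dist_m=100.0, min_name_sim=0.7):
    """Declare two POI records a match if they are spatially close AND
    their names are textually similar.  Thresholds are illustrative."""
    dist = haversine_m(a["lat"], a["lon"], b["lat"], b["lon"])
    sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return dist <= max_dist_m and sim >= min_name_sim

# Hypothetical records for the same cafe drawn from two datasets.
cafe_a = {"name": "Blue Bottle Coffee", "lat": 37.7763, "lon": -122.4233}
cafe_b = {"name": "Blue Bottle Cafe", "lat": 37.7764, "lon": -122.4234}
```

Real matching methods surveyed in such reviews go well beyond this two-threshold rule (e.g. learned similarity weights, category and address evidence), but most share this structure of combining spatial and attribute signals.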