
    The structure of verbal sequences analyzed with unsupervised learning techniques

    Data mining allows the exploration of sequences of phenomena, whereas one usually tends to focus on isolated phenomena or on the relation between two phenomena. It offers invaluable tools for theoretical analyses and for exploring the structure of sentences, texts, dialogues, and speech. We report here the results of an attempt at using it to inspect sequences of verbs from French accounts of road accidents. This analysis is based on an original unsupervised learning approach that discovers the structure of sequential data. The input to the analyzer consisted solely of the verbs appearing in the sentences. It classified the links between two successive verbs into four distinct clusters, thereby allowing text segmentation. We give here an interpretation of these clusters by applying a statistical analysis to independent semantic annotations.
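
    The abstract does not spell out the clustering algorithm, so the following is only a minimal sketch of the general idea, assuming a bag-of-words encoding of verb transitions and k-means with four clusters; the verb sequences are invented for illustration.

```python
# Illustrative sketch only: treat each pair of successive verbs as one data point
# and group the transitions into four clusters without any labels. The verb
# sequences below are invented; the encoding and k-means are assumptions, not
# the paper's actual unsupervised method.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

# Hypothetical verb sequences, one per accident report.
verb_sequences = [
    ["rouler", "freiner", "heurter", "constater"],
    ["circuler", "voir", "freiner", "percuter"],
    ["arriver", "glisser", "heurter", "appeler"],
]

# Each link between two successive verbs becomes one small "document": "verb_a verb_b".
transitions = [f"{a} {b}" for seq in verb_sequences for a, b in zip(seq, seq[1:])]

# Bag-of-words encoding, then k-means with k=4, mirroring the four link clusters
# reported above; cluster labels can then be used to segment the text.
X = CountVectorizer().fit_transform(transitions)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

for transition, cluster in zip(transitions, labels):
    print(cluster, transition)
```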

    Analyzing the Tagging Quality of the Spanish OpenStreetMap

    In this paper, a framework for assessing the quality of OpenStreetMap is presented, comprising a set of methods to analyze the quality of entity tagging. The approach uses Taginfo as a reference base and evaluates quality measures such as completeness, compliance, consistency, granularity, richness, and trust. The framework has been used to analyze the quality of OpenStreetMap in Spain, comparing the country's main cities, and a comparison between Spain and some major European cities has also been carried out. Additionally, a Web tool has been developed to facilitate the same kind of analysis in any area of the world.
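
    As a concrete illustration of one of the measures named above, here is a minimal sketch of a completeness-style score; the expected-key list and the scoring rule are assumptions made for the example, not the framework's actual definitions, with Taginfo only the notional source of the expected keys.

```python
# Minimal sketch of a "completeness"-style tagging quality measure.
# The expected keys per feature type and the simple ratio below are illustrative
# assumptions, not the framework's actual metric definitions.

# Hypothetical expected keys for a feature type (e.g. drawn from Taginfo statistics).
EXPECTED_KEYS = {
    "restaurant": {"name", "cuisine", "opening_hours", "addr:street", "phone"},
}

def tag_completeness(feature_type: str, tags: dict) -> float:
    """Fraction of the expected keys for this feature type that are actually present."""
    expected = EXPECTED_KEYS.get(feature_type, set())
    if not expected:
        return 0.0
    return len(expected & tags.keys()) / len(expected)

# Example OSM-style entity tags.
tags = {"amenity": "restaurant", "name": "Casa Pepe", "cuisine": "spanish"}
print(tag_completeness("restaurant", tags))  # 0.4
```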

    The Latent Relation Mapping Engine: Algorithm and Experiments

    Many AI researchers and cognitive scientists have argued that analogy is the core of cognition. The most influential work on computational modeling of analogy-making is Structure Mapping Theory (SMT) and its implementation in the Structure Mapping Engine (SME). A limitation of SME is the requirement for complex hand-coded representations. We introduce the Latent Relation Mapping Engine (LRME), which combines ideas from SME and Latent Relational Analysis (LRA) in order to remove the requirement for hand-coded representations. LRME builds analogical mappings between lists of words, using a large corpus of raw text to automatically discover the semantic relations among the words. We evaluate LRME on a set of twenty analogical mapping problems, ten based on scientific analogies and ten based on common metaphors. LRME achieves human-level performance on the twenty problems. We compare LRME with a variety of alternative approaches and find that they are not able to reach the same level of performance. Related work is available at http://purl.org/peter.turney
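
    To make the mapping task concrete, here is a toy sketch that assumes hand-made relation vectors in place of LRME's corpus-derived pattern statistics and brute-forces all target permutations; it illustrates the idea of a best global mapping, not the paper's actual algorithm.

```python
# Toy sketch only: map a source word list onto a target list by maximizing the
# relational fit between corresponding word pairs. The relation vectors below are
# fabricated stand-ins for corpus-derived statistics.
from itertools import permutations
import numpy as np

source = ["sun", "planet"]
target = ["nucleus", "electron"]

# Hypothetical relation vectors for ordered word pairs.
relation = {
    ("sun", "planet"): np.array([0.9, 0.1, 0.0]),
    ("planet", "sun"): np.array([0.1, 0.8, 0.1]),
    ("nucleus", "electron"): np.array([0.85, 0.15, 0.0]),
    ("electron", "nucleus"): np.array([0.1, 0.7, 0.2]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def ordered_pairs(words):
    return [(a, b) for a in words for b in words if a != b]

def mapping_score(src, tgt):
    # Sum of similarities between corresponding source and target pair relations.
    return sum(cosine(relation[p], relation[q])
               for p, q in zip(ordered_pairs(src), ordered_pairs(tgt)))

best = max(permutations(target), key=lambda perm: mapping_score(source, list(perm)))
print(dict(zip(source, best)))  # expected: {'sun': 'nucleus', 'planet': 'electron'}
```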

    Discriminating small wooded elements in rural landscape from aerial photography: a hybrid pixel/object-based analysis approach

    While small, fragmented wooded elements do not represent a large surface area in agricultural landscapes, their role in the sustainability of ecological processes is widely recognized. Unfortunately, landscape ecology studies suffer from the lack of methods for automatic detection of these elements. We propose a hybrid approach using both aerial photographs and ancillary data of coarser resolution to automatically discriminate small wooded elements. First, a spectral and textural analysis is performed to identify all the planted-tree areas in the digital photograph. Second, an object-oriented spatial analysis using the two data sources and including a multi-resolution segmentation is applied to distinguish between large and small woods, copses, hedgerows, and scattered trees. The results show the usefulness of the hybrid approach and the prospects for future ecological applications.
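
    A rough sketch of the two-step pixel/object idea follows; the local-variance texture cue, thresholds, and size cut-off are illustrative assumptions, not the paper's actual spectral analysis, ancillary data, or multi-resolution segmentation.

```python
# Rough sketch: (1) a pixel-level texture test to flag planted-tree pixels,
# (2) an object-level pass that separates large woods from small wooded elements
# by region size. All parameters below are placeholders for illustration.
import numpy as np
from scipy import ndimage

def detect_wooded_elements(image: np.ndarray, texture_thresh=50.0, small_max_px=500):
    # Pixel step: local variance as a crude texture cue (tree canopies are textured).
    img = image.astype(float)
    local_mean = ndimage.uniform_filter(img, size=5)
    local_sq_mean = ndimage.uniform_filter(img ** 2, size=5)
    texture = local_sq_mean - local_mean ** 2
    tree_mask = texture > texture_thresh

    # Object step: connected components, then split by area into small vs large.
    labels, n = ndimage.label(tree_mask)
    sizes = ndimage.sum(tree_mask, labels, index=range(1, n + 1))
    small = np.isin(labels, [i + 1 for i, s in enumerate(sizes) if s <= small_max_px])
    large = tree_mask & ~small
    return small, large

# Usage with a synthetic grayscale "aerial photograph".
img = np.random.randint(0, 255, size=(256, 256))
small_elems, large_woods = detect_wooded_elements(img)
```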

    Towards a Generic Trace for Rule Based Constraint Reasoning

    CHR is a very versatile programming language that allows programmers to declaratively specify constraint solvers. An important part of the development of such solvers lies in their testing and debugging phases. Current CHR implementations support those phases by offering tracing facilities with limited information. In this report, we propose a new trace for CHR which contains enough information to analyze any aspect of CHR∨ execution at a useful abstract level, common to several implementations. This approach is based on the idea of a generic trace. Such a trace is formally defined as an extension of the ω_r^∨ semantics of CHR. We show that it can be derived from the SWI-Prolog CHR trace.
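
    By way of illustration, here is one possible shape for implementation-independent trace events; the event kinds and fields are assumptions made for this sketch and are not the report's actual trace schema.

```python
# Illustrative sketch only: structured, solver-independent records of CHR
# execution steps, so analysis tools need not depend on any one implementation's
# low-level trace format. Event kinds and fields are assumed for the example.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TraceEvent:
    step: int                      # position in the derivation
    kind: str                      # e.g. "Introduce", "Apply", "Solve"
    rule: Optional[str] = None     # rule name, for "Apply" events
    constraints: List[str] = field(default_factory=list)  # constraints involved

# A tiny fragment of what such an abstract trace might contain.
trace = [
    TraceEvent(1, "Introduce", constraints=["leq(X, Y)"]),
    TraceEvent(2, "Introduce", constraints=["leq(Y, X)"]),
    TraceEvent(3, "Apply", rule="antisymmetry", constraints=["leq(X, Y)", "leq(Y, X)"]),
]

# Analysis tools then work on these events regardless of which CHR system
# (e.g. SWI-Prolog's) produced the underlying concrete trace.
applied_rules = [e.rule for e in trace if e.kind == "Apply"]
print(applied_rules)  # ['antisymmetry']
```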

    Human-Level Performance on Word Analogy Questions by Latent Relational Analysis

    This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, machine translation, and information retrieval. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason/stone is analogous to the pair carpenter/wood; the relations between mason and stone are highly similar to the relations between carpenter and wood. Past work on semantic similarity measures has mainly been concerned with attributional similarity. For instance, Latent Semantic Analysis (LSA) can measure the degree of similarity between two words, but not between two relations. Recently, the Vector Space Model (VSM) of information retrieval has been adapted to the task of measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus (they are not predefined), (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data (it is also used this way in LSA), and (3) automatically generated synonyms are used to explore reformulations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying noun-modifier relations, LRA achieves similar gains over the VSM while using a smaller corpus.
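
    A minimal sketch of the pair-by-pattern matrix with SVD smoothing is given below; the pairs, patterns, and counts are fabricated for illustration, and the synonym-based reformulation step is omitted.

```python
# Minimal sketch: rows are word pairs, columns are corpus patterns, and a
# truncated SVD smooths the frequency matrix before comparing pairs by cosine.
# All numbers below are fabricated; the real method derives patterns and counts
# from a large corpus.
import numpy as np

pairs = ["mason/stone", "carpenter/wood", "teacher/chalk"]
patterns = ["X cuts Y", "X works with Y", "X carves Y", "X writes on Y"]

# Hypothetical pair-by-pattern frequency matrix.
F = np.array([
    [12.0, 30.0, 8.0,  0.0],   # mason/stone
    [10.0, 28.0, 6.0,  0.0],   # carpenter/wood
    [ 0.0,  5.0, 0.0, 20.0],   # teacher/chalk
])

# Smooth with a truncated SVD (keep k singular values), as in LSA.
U, s, Vt = np.linalg.svd(F, full_matrices=False)
k = 2
F_smooth = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Relational similarity between pairs = cosine of their smoothed pattern vectors.
print(cosine(F_smooth[0], F_smooth[1]))  # mason/stone vs carpenter/wood (high)
print(cosine(F_smooth[0], F_smooth[2]))  # mason/stone vs teacher/chalk (lower)
```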