98,114 research outputs found

    Semantic categories underlying the meaning of ‘place’

    This paper analyses the semantics of natural language expressions that are associated with the intuitive notion of ‘place’. We note that the nature of such terms is highly contested, and suggest that this arises from two main considerations: 1) there are a number of logically distinct categories of place expression, which are not always clearly distinguished in discourse about ‘place’; 2) the many non-substantive place count nouns (such as ‘place’, ‘region’, ‘area’, etc.) employed in natural language are highly ambiguous. With respect to consideration 1), we propose that place-related expressions should be classified into the following distinct logical types: a) ‘place-like’ count nouns (further subdivided into abstract, spatial and substantive varieties), b) proper names of ‘place-like’ objects, c) locative property phrases, and d) definite descriptions of ‘place-like’ objects. We outline possible formal representations for each of these. To address consideration 2), we examine the meanings, connotations and ambiguities of the English vocabulary of abstract and generic place count nouns, and identify underlying elements of meaning, which explain both similarities and differences in the sense and usage of the various terms.
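    The proposed typology is, in effect, a small data model. As a rough illustrative sketch only (not the paper's own formal representation), the four logical types and the threefold subdivision of type a) could be encoded as below; all class and field names here are assumptions:

        from dataclasses import dataclass
        from enum import Enum

        class PlaceNounKind(Enum):
            # Subdivision of 'place-like' count nouns (type a).
            ABSTRACT = "abstract"
            SPATIAL = "spatial"
            SUBSTANTIVE = "substantive"

        @dataclass
        class PlaceCountNoun:
            # Type a: a 'place-like' count noun, e.g. 'place', 'region', 'area'.
            lemma: str
            kind: PlaceNounKind

        @dataclass
        class ProperName:
            # Type b: proper name of a 'place-like' object, e.g. 'Paris'.
            name: str

        @dataclass
        class LocativePropertyPhrase:
            # Type c: a locative property phrase, e.g. 'in the park'.
            preposition: str
            ground: str

        @dataclass
        class DefiniteDescription:
            # Type d: definite description of a 'place-like' object,
            # e.g. 'the area around the station'.
            head: PlaceCountNoun
            restriction: str

    Making the types explicit also makes the ambiguity of consideration 2) concrete: the same lemma (say, ‘area’) can be carried by distinct PlaceCountNoun values that differ only in kind.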

    GBU-Description


    HGC-Description


    Identity Without Supervenience


    Locating bugs without looking back

    Bug localisation is a core program comprehension task in software maintenance: given the observation of a bug, e.g. via a bug report, where is it located in the source code? Information retrieval (IR) approaches treat the bug report as the query and the source code files as the documents to be retrieved, ranked by relevance. Such approaches have the advantage of not requiring expensive static or dynamic analysis of the code. However, current state-of-the-art IR approaches rely on project history, in particular previously fixed bugs or previous versions of the source code. We present a novel approach that directly scores each current file against the given report, and thus requires no past code or reports. The scoring method is based on heuristics identified through manual inspection of a small sample of bug reports. We compare our approach to eight others, using their own five metrics on their own six open source projects. Out of 30 performance indicators, we improve on 27 and equal 2. Over the projects analysed, on average we find one or more affected files in the top 10 ranked files for 76% of the bug reports. These results show the applicability of our approach to software projects without history.
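    The abstract does not give the scoring heuristics themselves, but the IR framing it describes (the report as the query, source files as the documents, ranked by relevance) can be sketched with a plain TF-IDF cosine ranking. Everything below, including the tokeniser and function names, is an illustrative assumption, not the paper's method:

        import math
        import re
        from collections import Counter

        def tokenize(text: str) -> list[str]:
            # Crude tokenisation; a real tool would also split
            # camelCase identifiers and drop stop words.
            return re.findall(r"[A-Za-z_]+", text.lower())

        def rank_files(bug_report: str, files: dict[str, str]) -> list[tuple[str, float]]:
            # Rank each current file against the report by TF-IDF
            # cosine similarity; no project history is consulted.
            docs = {path: Counter(tokenize(text)) for path, text in files.items()}
            query = Counter(tokenize(bug_report))
            n = len(docs)
            df = Counter()  # document frequency per term
            for tf in docs.values():
                df.update(tf.keys())
            idf = {t: math.log(n / df[t]) for t in df}

            def weight(tf):
                return {t: c * idf.get(t, 0.0) for t, c in tf.items()}

            qw = weight(query)
            qnorm = math.sqrt(sum(v * v for v in qw.values())) or 1.0
            scores = []
            for path, tf in docs.items():
                dw = weight(tf)
                dot = sum(qw[t] * dw.get(t, 0.0) for t in qw)
                dnorm = math.sqrt(sum(v * v for v in dw.values())) or 1.0
                scores.append((path, dot / (qnorm * dnorm)))
            return sorted(scores, key=lambda s: s[1], reverse=True)

    A history-free ranker along these lines would then be refined with heuristics of the kind the paper derives from manual inspection of bug reports, e.g. boosting files whose names occur verbatim in the report.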