
    Semantic categories underlying the meaning of 'place'

    This paper analyses the semantics of natural language expressions that are associated with the intuitive notion of 'place'. We note that the nature of such terms is highly contested, and suggest that this arises from two main considerations: 1) there are a number of logically distinct categories of place expression, which are not always clearly distinguished in discourse about 'place'; 2) the many non-substantive place count nouns (such as 'place', 'region', 'area', etc.) employed in natural language are highly ambiguous. With respect to consideration 1), we propose that place-related expressions should be classified into the following distinct logical types: a) 'place-like' count nouns (further subdivided into abstract, spatial and substantive varieties), b) proper names of 'place-like' objects, c) locative property phrases, and d) definite descriptions of 'place-like' objects. We outline possible formal representations for each of these. To address consideration 2), we examine meanings, connotations and ambiguities of the English vocabulary of abstract and generic place count nouns, and identify underlying elements of meaning, which explain both similarities and differences in the sense and usage of the various terms.
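
    Purely as an illustration, and not the paper's formal representations, the proposed classification could be mirrored in a small type hierarchy; every name below, such as PlaceExpression, LocativePhrase and CountNounVariety, is hypothetical, and the example readings in the comments are guesses based only on the abstract.

    from dataclasses import dataclass
    from enum import Enum, auto

    class CountNounVariety(Enum):
        """The three proposed subdivisions of 'place-like' count nouns."""
        ABSTRACT = auto()
        SPATIAL = auto()       # illustrative reading: purely spatial extents ('region', 'area')
        SUBSTANTIVE = auto()   # illustrative reading: materially constituted places

    @dataclass
    class PlaceExpression:
        """Base type for the four logical categories of place expression."""
        surface_form: str

    @dataclass
    class PlaceCountNoun(PlaceExpression):     # a) 'place-like' count nouns
        variety: CountNounVariety

    @dataclass
    class PlaceProperName(PlaceExpression):    # b) proper names of 'place-like' objects
        pass

    @dataclass
    class LocativePhrase(PlaceExpression):     # c) locative property phrases
        preposition: str
        ground: PlaceExpression

    @dataclass
    class PlaceDescription(PlaceExpression):   # d) definite descriptions of 'place-like' objects
        head_noun: PlaceCountNoun
        locative: LocativePhrase | None = None

    # Example: "the area near Paris" as a definite description built from a spatial
    # count noun and a locative property phrase anchored on a proper name.
    paris = PlaceProperName("Paris")
    near_paris = LocativePhrase("near Paris", preposition="near", ground=paris)
    the_area_near_paris = PlaceDescription(
        "the area near Paris",
        head_noun=PlaceCountNoun("area", CountNounVariety.SPATIAL),
        locative=near_paris,
    )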

    SIFTing the relevant from the irrelevant: Automatically detecting objects in training images

    Many state-of-the-art object recognition systems rely on identifying the location of objects in images in order to better learn their visual attributes. In this paper, we propose four simple yet powerful hybrid ROI detection methods (combining both local and global features), based on frequently occurring keypoints. We show that our methods demonstrate competitive performance on two different types of dataset, the Caltech101 dataset and the GRAZ-02 dataset, where the pairs-of-keypoints bounding-box method achieved the best accuracy overall.
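
    The sketch below is not the paper's hybrid local-plus-global ROI methods; it only illustrates, under simplifying assumptions, the underlying idea of taking a bounding box around frequently occurring (here, densely clustered) SIFT keypoints, using OpenCV. The function name keypoint_roi and the keep_fraction heuristic are invented for the example.

    # Requires opencv-contrib-python (or OpenCV >= 4.4) and numpy.
    import cv2
    import numpy as np

    def keypoint_roi(image_path: str, keep_fraction: float = 0.6) -> tuple[int, int, int, int]:
        """Return (x, y, w, h) of a box around the densest cluster of SIFT keypoints."""
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        keypoints = cv2.SIFT_create().detect(img, None)
        if not keypoints:
            h, w = img.shape
            return 0, 0, w, h                                       # fall back to the whole image

        pts = np.array([kp.pt for kp in keypoints])                 # (N, 2) keypoint coordinates
        centre = np.median(pts, axis=0)                             # robust centre of the keypoint cloud
        dists = np.linalg.norm(pts - centre, axis=1)
        n_keep = max(1, int(len(pts) * keep_fraction))
        keep = pts[np.argsort(dists)[:n_keep]]                      # discard outlying keypoints

        x_min, y_min = keep.min(axis=0)
        x_max, y_max = keep.max(axis=0)
        return int(x_min), int(y_min), int(x_max - x_min), int(y_max - y_min)

    # Usage (hypothetical image path):
    # x, y, w, h = keypoint_roi("caltech101/airplanes/image_0001.jpg")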

    Active Object Localization in Visual Situations

    We describe a method for performing active localization of objects in instances of visual situations. A visual situation is an abstract concept---e.g., "a boxing match", "a birthday party", "walking the dog", "waiting for a bus"---whose image instantiations are linked more by their common spatial and semantic structure than by low-level visual similarity. Our system combines given and learned knowledge of the structure of a particular situation, and adapts that knowledge to a new situation instance as it actively searches for objects. More specifically, the system learns a set of probability distributions describing spatial and other relationships among relevant objects. The system uses those distributions to iteratively sample object proposals on a test image, but also continually uses information from those object proposals to adaptively modify the distributions based on what the system has detected. We test our approach's ability to efficiently localize objects, using a situation-specific image dataset created by our group. We compare the results with several baselines and variations on our method, and demonstrate the strong benefit of using situation knowledge and active context-driven localization. Finally, we contrast our method with several other approaches that use context as well as active search for object localization in images.
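
    As a rough sketch only, and not the authors' system, the active context-driven loop can be read as: sample object proposals from a learned spatial distribution relative to an already-detected anchor object, score each proposal with a detector, and adapt the distribution whenever a confident detection appears. The Gaussian prior, the fixed 64x64 proposals, the update constants and the names active_localize and score_fn are all assumptions made for this illustration.

    import numpy as np

    def active_localize(image, anchor_xy, prior_mean, prior_cov, score_fn,
                        steps=50, conf=0.8, seed=0):
        """Sample proposals near `anchor_xy` from a Gaussian spatial prior, score them
        with `score_fn(image, box)`, and re-centre the prior on confident detections."""
        rng = np.random.default_rng(seed)
        mean = np.asarray(prior_mean, dtype=float)
        cov = np.asarray(prior_cov, dtype=float)
        best_box, best_score = None, -np.inf

        for _ in range(steps):
            offset = rng.multivariate_normal(mean, cov)      # sampled displacement from the anchor
            cx, cy = np.asarray(anchor_xy, dtype=float) + offset
            box = (cx - 32, cy - 32, 64, 64)                 # fixed-size proposal for simplicity
            score = score_fn(image, box)

            if score > best_score:
                best_box, best_score = box, score
            if score > conf:
                mean = 0.5 * mean + 0.5 * offset             # shift the prior toward the evidence
                cov = 0.9 * cov                              # and sharpen it as confidence grows

        return best_box, best_score

    # Usage with a toy scorer that prefers boxes centred near image coordinates (120, 80):
    # score = lambda img, b: float(np.exp(-((b[0] + 32 - 120) ** 2 + (b[1] + 32 - 80) ** 2) / 2e3))
    # box, s = active_localize(None, anchor_xy=(100, 100), prior_mean=[0, 0],
    #                          prior_cov=[[400, 0], [0, 400]], score_fn=score)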