829 research outputs found

    Automated annotation of landmark images using community contributed datasets and web resources

    A novel solution to the challenge of automatic image annotation is described. Given an image with GPS data of its location of capture, our system returns a semantically rich annotation comprising tags which both identify the landmark in the image and provide an interesting fact about it, e.g. "A view of the Eiffel Tower, which was built in 1889 for an international exhibition in Paris". This exploits visual and textual web mining in combination with content-based image analysis and natural language processing. In the first stage, an input image is matched to a set of community-contributed images (with keyword tags) on the basis of its GPS information and image classification techniques. The depicted landmark is inferred from the keyword tags of the matched set. The system then takes advantage of the information written about landmarks available on the web at large to extract a fact about the landmark in the image. We report component evaluation results from an implementation of our solution on a mobile device. Image localisation and matching offers 93.6% classification accuracy; the selection of appropriate tags for use in annotation performs well (F1M of 0.59), and the system subsequently automatically identifies a correct toponym for use in captioning and fact extraction in 69.0% of the tested cases; finally, the fact extraction returns an interesting caption in 78% of cases.
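
    A minimal sketch of the flavour of the first stage, assuming a simple GPS radius filter and majority voting over community tags. The `CommunityImage` structure, the 1 km radius, and the voting rule are illustrative assumptions, not the paper's implementation, which additionally uses content-based image classification to match the query image.

```python
# Illustrative sketch only: GPS-based candidate filtering plus tag voting.
# The data structure, radius, and voting rule are assumptions; the paper's
# system additionally applies content-based image matching.
import math
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class CommunityImage:
    lat: float
    lon: float
    tags: list = field(default_factory=list)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def infer_landmark(query_lat, query_lon, community, radius_km=1.0):
    """Infer the depicted landmark from tags of nearby community images."""
    nearby = [img for img in community
              if haversine_km(query_lat, query_lon, img.lat, img.lon) <= radius_km]
    votes = Counter(tag for img in nearby for tag in img.tags)
    return votes.most_common(1)[0][0] if votes else None

community = [
    CommunityImage(48.8584, 2.2945, ["eiffel tower", "paris"]),
    CommunityImage(48.8587, 2.2950, ["eiffel tower", "night"]),
    CommunityImage(48.8600, 2.3000, ["seine"]),
]
print(infer_landmark(48.8583, 2.2944, community))  # -> "eiffel tower"
```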

    A Non-Visual Photo Collection Browser based on Automatically Generated Text Descriptions

    This study presents a textual photo collection browser that automatically and quickly analyses large personal photo collections and produces textual reports, which can be accessed by blind users using either text-to-speech or Braille output devices. The textual photo browser exploits recent advances in image collection analysis, and the strategy does not rely on manual image tagging. The reports produced by the browser give the user a gist of where, when, and what the photographer was doing, in the form of a story. Although still crude, the strategy can give blind users a valuable overview of the contents of large image collections and individual images which are otherwise totally inaccessible without vision.
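
    A toy sketch of one way such "where, when, what" reports could be derived from photo metadata alone: segmenting a time-sorted collection into events at large time gaps and emitting one sentence per event. The field names, the one-hour gap, and the sentence template are assumptions for illustration; the actual browser draws on richer image collection analysis.

```python
# Toy sketch: turning photo metadata into a short textual report.
# Field names, the one-hour event gap, and the sentence template are
# assumptions; the actual browser derives richer "where/when/what" content.
from datetime import datetime, timedelta

photos = [
    {"time": datetime(2020, 7, 14, 10, 5), "place": "Paris"},
    {"time": datetime(2020, 7, 14, 10, 40), "place": "Paris"},
    {"time": datetime(2020, 7, 14, 18, 30), "place": "Versailles"},
]

def segment_events(photos, gap=timedelta(hours=1)):
    """Group time-sorted photos into events, splitting at large time gaps."""
    events, current = [], []
    for p in sorted(photos, key=lambda p: p["time"]):
        if current and p["time"] - current[-1]["time"] > gap:
            events.append(current)
            current = []
        current.append(p)
    if current:
        events.append(current)
    return events

for event in segment_events(photos):
    start, place = event[0]["time"], event[0]["place"]
    print(f"{len(event)} photo(s) taken around {place} "
          f"on {start:%d %B %Y}, starting at {start:%H:%M}.")
```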

    A configural dominant account of contextual cueing: configural cues are stronger than colour cues

    Previous work has shown that reaction times to find a target in displays that have been repeated are faster than those for displays that have never been seen before. This learning effect, termed “contextual cueing” (CC), has been shown using contexts such as the configuration of the distractors in the display and the background colour. However, it is not clear how these two contexts interact to facilitate search. We investigated this here by comparing the strengths of these two cues when they appeared together. In Experiment 1, participants searched for a target that was cued by both colour and distractor-configuration cues, compared with when the target was only predicted by configural information. The results showed that the addition of a colour cue did not increase contextual cueing. In Experiment 2, participants searched for a target that was cued by both colour and distractor configuration, compared with when the target was only cued by colour. The results showed that adding a predictive configural cue led to a stronger CC benefit. Experiments 3 and 4 tested the disruptive effects of removing either a learned colour cue or a learned configural cue, and whether there was cue competition when colour and configural cues were presented together. Removing the configural cue was more disruptive to CC than removing colour, and configural learning was shown to overshadow the learning of colour cues. The data support a configural-dominant account of CC, in which configural cues are the stronger cue when the two are presented together.

    Digital Image

    This paper considers the ontological significance of invisibility in relation to the question ‘what is a digital image?’ Its argument, in a nutshell, is that the emphasis on visibility comes at the expense of latency and is symptomatic of the style of thinking that has dominated Western philosophy since Plato. This privileging of visible content necessarily binds images to linguistic (semiotic and structuralist) paradigms of interpretation which promote representation, subjectivity, identity and negation over multiplicity, indeterminacy and affect. Photography is the case in point, because until recently critical approaches to photography had one thing in common: they all shared the implicit and incontrovertible understanding that photographs are a medium that must be approached visually; they took it as a given that photographs are there to be looked at, and they all agreed that it is only through the practices of spectatorship that the secrets of the image can be unlocked. Whatever subsequent interpretations followed, the priority of vision in relation to the image remained unperturbed. This undisputed belief in the visibility of the image has such a strong grasp on theory that it imperceptibly bonded together otherwise dissimilar and sometimes contradictory methodologies, preventing them from noticing that which is most unexplained about images: the precedence of looking itself. This self-evident truth of visibility casts a long shadow on image theory because it blocks the possibility of inquiring after everything that is invisible, latent and hidden.

    BRULÈ: Barycenter-Regularized Unsupervised Landmark Extraction

    Unsupervised retrieval of image features is vital for many computer vision tasks where annotation is missing or scarce. In this work, we propose a new unsupervised approach to detecting landmarks in images, validating it on the popular task of human face key-point extraction. The method is based on the idea of auto-encoding the desired landmarks in the latent space while discarding the non-essential information (and effectively preserving interpretability). The interpretable latent-space representation (a bottleneck containing nothing but the desired key-points) is achieved by a new two-step regularization approach. The first regularization step evaluates the transport distance from a given set of landmarks to some average value (the barycenter by Wasserstein distance). The second regularization step controls deviations from the barycenter by applying random geometric deformations synchronously to the initial image and to the encoded landmarks. We demonstrate the effectiveness of the approach in both unsupervised and semi-supervised training scenarios using the 300-W, CelebA, and MAFL datasets. The proposed regularization paradigm is shown to prevent overfitting, and the detection quality is shown to improve beyond that of state-of-the-art face models. Comment: 10 main pages with 6 figures and 1 table, 14 pages total with 6 supplementary figures. I.B. and N.B. contributed equally. D.V.D. is corresponding author.
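
    A rough sketch of the two regularization ideas, assuming landmark sets are (K, 2) point arrays: a discrete transport distance to a barycenter computed via optimal assignment, and an equivariance penalty under a geometric deformation applied to both image and landmarks. The encoder stub, the shared shift used as the "deformation", and all names are assumptions; the paper's actual losses and architecture differ in detail.

```python
# Rough sketch of the two regularizers, not the paper's implementation.
# Landmarks are (K, 2) arrays; the encoder stub, the deformation, and all
# names here are assumptions for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def transport_distance(landmarks, barycenter):
    """Optimal-assignment (discrete transport) cost between two point sets."""
    cost = cdist(landmarks, barycenter)       # pairwise Euclidean costs
    rows, cols = linear_sum_assignment(cost)  # optimal matching
    return cost[rows, cols].mean()

def equivariance_penalty(encode, image, transform_img, transform_pts):
    """Penalize encoders whose landmarks do not move with the image."""
    deformed = encode(transform_img(image))   # landmarks of deformed image
    expected = transform_pts(encode(image))   # deformed landmarks of image
    return np.mean((deformed - expected) ** 2)

# Toy usage with a stub encoder and a shared 2D shift as the "deformation".
rng = np.random.default_rng(0)
barycenter = rng.uniform(0, 1, size=(5, 2))   # average landmark layout
encode = lambda img: img["pts"]               # stub: image carries its points
shift = np.array([0.1, -0.05])
image = {"pts": rng.uniform(0, 1, size=(5, 2))}

reg1 = transport_distance(encode(image), barycenter)
reg2 = equivariance_penalty(
    encode, image,
    transform_img=lambda im: {"pts": im["pts"] + shift},
    transform_pts=lambda pts: pts + shift,
)
print(reg1, reg2)  # reg2 is 0 for this perfectly equivariant stub
```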

    Context-sensitive interpretation of natural language location descriptions: a thesis submitted in partial fulfilment of the requirements for the award of Doctor of Philosophy in Information Technology at Massey University, Auckland, New Zealand

    People frequently describe the locations of objects using natural language. Location descriptions may be either structured, such as 26 Victoria Street, Auckland, or unstructured. Relative location descriptions (e.g., building near Sky Tower) are a common form of unstructured location description, and use qualitative terms to describe the location of one object relative to another (e.g., near, close to, in, next to). Understanding the meaning of these terms is easy for humans but much more difficult for machines, since the terms are inherently vague and context sensitive. In this thesis, we study the semantics (or meaning) of qualitative geospatial relation terms, specifically geospatial prepositions. Prepositions are one of the most common forms of geospatial relation term, and they are commonly used to describe the location of objects in the geographic (geospatial) environment, such as rivers, mountains, buildings, and towns. A thorough understanding of the semantics of geospatial relation terms is important because it enables more accurate automated georeferencing of text location descriptions than use of place names only. Location descriptions that use geospatial prepositions are found in social media, web sites, blogs, and academic reports, and georeferencing can allow mapping of health, disaster, and biological data that is currently inaccessible to the public. Such descriptions have an unstructured format, so their analysis is not straightforward.

    The specific research questions that we address are:
    RQ1. Which geospatial prepositions (or groups of prepositions) and senses are semantically similar?
    RQ2. Is the role of context important in the interpretation of location descriptions?
    RQ3. Is the object distance associated with geospatial prepositions across a range of geospatial scenes and scales accurately predictable using machine learning methods?
    RQ4. Is human annotation a reliable form of annotation for the analysis of location descriptions?

    To address RQ1, we determine the nature and degree of similarity among geospatial prepositions by analysing data collected with a human subjects experiment, using clustering, extensional mapping, and t-distributed stochastic neighbour embedding (t-SNE) plots to form a semantic similarity matrix. In addition to calculating similarity scores among prepositions, we identify the senses of three groups of geospatial prepositions using Venn diagrams, t-SNE plots, and density-based clustering, and define the relationships between the senses. Furthermore, we use two text mining approaches to identify the degree of similarity among geospatial prepositions: bag of words and GloVe embeddings. Using these methods and further analysis, we identify semantically similar groups of geospatial prepositions, including: (1) beside, close to, near, next to, outside, and adjacent to; (2) across, over, and through; and (3) beyond, past, by, and off. The prepositions within these groups also share senses. Through is recognised as a specialisation of both across and over. Proximity and adjacency prepositions also have similar senses that express orientation and overlapping relations. Past, off, and by share a proximal sense, but beyond has a different sense from these, representing on the other side. Another finding is the more frequent use of the preposition close to for pairs of linear objects than near, which is used more frequently for non-linear ones. Also, next to is used to describe proximity more than touching (in contrast to other prepositions like adjacent to). Our application of text mining to identify semantically similar prepositions confirms that a geospatial corpus (NCGL) provides a better representation of the semantics of geospatial prepositions than a general corpus. We also found that GloVe embeddings provide adequate semantic similarity measures for more specialised geospatial prepositions, but less so for those that have more generalised applications and multiple senses.

    We explore the role of context (RQ2) by studying three sites in London that vary in size, nature, and context: Trafalgar Square, Buckingham Palace, and Hyde Park. We use the Google search engine to extract location descriptions that contain these three sites with 9 different geospatial prepositions (in, on, at, next to, close to, adjacent to, near, beside, outside), and calculate their acceptance profiles (the profile of the use of a preposition at different distances from the reference object) and acceptance thresholds (the maximum distance from a reference object at which a preposition can acceptably be used). We use these to compare prepositions and to explore the influence of different contexts. Our results show that near, in, and outside are used for larger distances, while beside, adjacent to, and at are used for smaller distances. Also, the acceptance threshold for close to is higher than for other proximity/adjacency prepositions such as next to, adjacent to, and beside. The acceptance threshold of next to is larger than that of adjacent to, which confirms the finding above that next to describes a proximity rather than a touching spatial relation. We also found that relatum characteristics such as image schema affect the use of prepositions such as in, on, and at.

    We address RQ3 by developing a machine learning regression model (using the SMOReg algorithm) to predict the distance associated with the use of geospatial prepositions in specific expressions. We incorporate a wide range of input variables, including the similarity matrix of geospatial prepositions (RQ1); preposition senses; semantic information in the form of embeddings; characteristics of the located and reference objects in the expression, including their liquidity/solidity, scale, and geometry type; and contextual factors such as the density of features of different types in the surrounding area. We evaluate the model on two different datasets, achieving a 25% improvement over the best baseline on each.

    Finally, we consider the importance of annotation of geospatial location descriptions (RQ4). As annotated data is essential for the successful study of automated interpretation of natural language descriptions, we study the impact and accuracy of human annotation of different geospatial elements. Agreement scores show that human annotators can annotate geospatial relation terms (e.g., geospatial prepositions) with higher agreement than other geospatial elements. This thesis advances understanding of the semantics of geospatial prepositions, particularly their semantic similarity and the impact of context on their interpretation. We quantify the semantic similarity of a set of 24 geospatial prepositions; identify senses and the relationships among them for 13 geospatial prepositions; compare the acceptance thresholds of 9 geospatial prepositions and describe the influence of context on them; and demonstrate that richer semantic and contextual information can be incorporated in predictive models to interpret relative geospatial location descriptions more accurately.
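
    A toy sketch of how acceptance profiles and thresholds of the kind described for RQ2 might be computed from extracted (preposition, distance) observations. The sample data, the bin width, and the use of the maximum observed distance as the threshold are illustrative assumptions, not the thesis's exact procedure.

```python
# Toy sketch: acceptance profiles and thresholds from (preposition, distance)
# observations. The sample data, bin width, and max-distance threshold rule
# are assumptions, not the thesis's exact procedure.
from collections import defaultdict

# Hypothetical distances (metres) between located and reference objects
# for each extracted preposition use.
observations = [
    ("next to", 15), ("next to", 40), ("next to", 80),
    ("close to", 60), ("close to", 150), ("close to", 320),
    ("adjacent to", 5), ("adjacent to", 25),
]

def acceptance_profile(observations, prep, bin_m=50):
    """Count uses of `prep` per distance bin (the acceptance profile)."""
    profile = defaultdict(int)
    for p, d in observations:
        if p == prep:
            profile[(d // bin_m) * bin_m] += 1
    return dict(sorted(profile.items()))

def acceptance_threshold(observations, prep):
    """Maximum distance at which `prep` was observed to be used."""
    return max(d for p, d in observations if p == prep)

for prep in ("next to", "close to", "adjacent to"):
    print(prep, acceptance_profile(observations, prep),
          acceptance_threshold(observations, prep))
```

    On this toy data the threshold ordering (close to > next to > adjacent to) mirrors the thesis's reported pattern for proximity/adjacency prepositions.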