
    SVS-JOIN : efficient spatial visual similarity join for geo-multimedia

    In the big data era, massive amounts of multimedia data with geo-tags have been generated and collected by smart devices equipped with mobile communication and positioning modules. This trend places higher demands on large-scale geo-multimedia retrieval. Spatial similarity join is one of the significant problems in the area of spatial databases. Previous works focused on the spatial textual document search problem rather than geo-multimedia retrieval. In this paper, we investigate a novel geo-multimedia retrieval paradigm named spatial visual similarity join (SVS-JOIN for short), which aims to find similar geo-image pairs in terms of both geo-location and visual content. First, we give the definition of SVS-JOIN and then present the geographical and visual similarity measures. Inspired by approaches for textual similarity join, we develop an algorithm named SVS-JOIN B by combining the PPJOIN algorithm with visual similarity. In addition, we develop an extension of it named SVS-JOIN G, which utilizes a spatial grid strategy to improve search efficiency. To further speed up the search, a novel approach called SVS-JOIN Q is carefully designed, in which a quadtree and a global inverted index are employed. Comprehensive experiments are conducted on two geo-image datasets, and the results demonstrate that our solution addresses the SVS-JOIN problem effectively and efficiently.
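    The core idea of a grid-based spatial visual similarity join can be illustrated with a minimal sketch. This is not the paper's PPJOIN-based algorithm; it is a simplified stand-in in which images are bucketed into grid cells, only pairs in the same or neighbouring cells are scored, and a pair is reported when a weighted combination of a toy geographic similarity and cosine visual similarity exceeds a threshold. All record values, the cell size `CELL`, and the weights `alpha`/`tau` are assumptions for illustration.

    ```python
    import math

    # Hypothetical geo-image records: (id, latitude, longitude, visual feature vector)
    images = [
        ("a", 48.858, 2.294, [0.9, 0.1]),
        ("b", 48.859, 2.295, [0.8, 0.2]),
        ("c", 40.689, -74.044, [0.1, 0.9]),
    ]

    CELL = 0.01  # grid cell size in degrees (assumed parameter)

    def cell_of(lat, lon):
        # Map a coordinate to its integer grid cell
        return (int(lat // CELL), int(lon // CELL))

    def geo_sim(p, q):
        # Toy inverse-distance similarity on raw lat/lon (illustrative only)
        d = math.hypot(p[0] - q[0], p[1] - q[1])
        return 1.0 / (1.0 + d)

    def visual_sim(u, v):
        # Cosine similarity between visual feature vectors
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    def svs_join(images, alpha=0.5, tau=0.8):
        # Bucket images by grid cell so only nearby pairs are ever scored
        grid = {}
        for rec in images:
            grid.setdefault(cell_of(rec[1], rec[2]), []).append(rec)
        seen, results = set(), []
        for (ci, cj), bucket in grid.items():
            # Candidates come from this cell and its eight neighbours
            cand = []
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    cand.extend(grid.get((ci + di, cj + dj), []))
            for p in bucket:
                for q in cand:
                    if p[0] >= q[0]:          # order ids to avoid self/reversed pairs
                        continue
                    key = (p[0], q[0])
                    if key in seen:           # de-duplicate across overlapping cells
                        continue
                    seen.add(key)
                    score = (alpha * geo_sim((p[1], p[2]), (q[1], q[2]))
                             + (1 - alpha) * visual_sim(p[3], q[3]))
                    if score >= tau:
                        results.append((p[0], q[0], score))
        return results
    ```

    With the sample records above, only the two adjacent Paris images form a qualifying pair; the grid pruning means the distant third image is never compared against them at all.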

    Location Estimation of a Photo: A Geo-signature MapReduce Workflow

    Location estimation of a photo, a recent branch of image retrieval, is the task of finding the location where the photo was taken. A large number of photos are shared on social multimedia platforms, and photos without geo-tags can have their locations estimated with the help of millions of geo-tagged photos from those platforms. Recent research on photo location estimation is available; however, most of it neglects to define the uniqueness of a place that distinguishes it completely from other places. In this paper, we design a workflow named G-sigMR (Geo-signature MapReduce) to improve recognition performance. Our workflow generates a uniqueness descriptor of a location, named a Geo-signature, which is summarized from visual synonyms using the MapReduce structure for indexing over a large-scale dataset. To assess its validity for image retrieval, G-sigMR was quantitatively evaluated on the standard benchmark for location estimation and compared with other well-known approaches (IM2GPS, SC, CS, MSER, VSA and VCG) in terms of average recognition rate. In the results, G-sigMR outperformed the previous approaches.
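    The map/reduce split behind a geo-signature workflow can be sketched in a few lines. This is an assumed, simplified stand-in for the paper's visual-synonym summarisation: the mapper emits (grid cell, visual word) pairs, and the reducer keeps each cell's most frequent visual words as its "signature". The photo records, cell labels, and `top_k` parameter are all hypothetical.

    ```python
    from collections import Counter, defaultdict

    # Hypothetical geo-tagged photos: (photo_id, grid_cell, visual words)
    photos = [
        ("p1", "cellA", ["tower", "arch"]),
        ("p2", "cellA", ["tower", "tree"]),
        ("p3", "cellB", ["beach", "tree"]),
    ]

    def map_phase(photos):
        # Mapper: emit one (cell, visual word) pair per word occurrence
        for _, cell, words in photos:
            for w in words:
                yield cell, w

    def reduce_phase(pairs, top_k=1):
        # Reducer: group pairs by cell and keep the most frequent visual
        # words as that cell's geo-signature
        by_cell = defaultdict(Counter)
        for cell, w in pairs:
            by_cell[cell][w] += 1
        return {c: [w for w, _ in cnt.most_common(top_k)]
                for c, cnt in by_cell.items()}

    signatures = reduce_phase(map_phase(photos))
    ```

    In a real MapReduce deployment the shuffle between the two phases is handled by the framework; here the generator pipeline plays that role so the sketch stays self-contained.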

    Automated annotation of landmark images using community contributed datasets and web resources

    A novel solution to the challenge of automatic image annotation is described. Given an image with GPS data of its location of capture, our system returns a semantically-rich annotation comprising tags which both identify the landmark in the image and provide an interesting fact about it, e.g. "A view of the Eiffel Tower, which was built in 1889 for an international exhibition in Paris". This exploits visual and textual web mining in combination with content-based image analysis and natural language processing. In the first stage, an input image is matched to a set of community-contributed images (with keyword tags) on the basis of its GPS information and image classification techniques. The depicted landmark is inferred from the keyword tags of the matched set. The system then takes advantage of the information written about landmarks available on the web at large to extract a fact about the landmark in the image. We report component evaluation results from an implementation of our solution on a mobile device. Image localisation and matching offers 93.6% classification accuracy; the selection of appropriate tags for use in annotation performs well (F1M of 0.59), and it subsequently automatically identifies a correct toponym for use in captioning and fact extraction in 69.0% of the tested cases; finally, the fact extraction returns an interesting caption in 78% of cases.
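    The first stage of such a pipeline, filtering community photos by GPS proximity and inferring the landmark by tag voting, can be sketched as follows. This is a minimal illustration, not the paper's classifier: the community records, the 1 km radius, and the equirectangular distance approximation are all assumptions.

    ```python
    import math
    from collections import Counter

    # Hypothetical community-contributed photos: (lat, lon, keyword tags)
    community = [
        (48.8584, 2.2945, ["eiffel", "paris", "tower"]),
        (48.8585, 2.2946, ["eiffel", "night"]),
        (40.6892, -74.0445, ["liberty", "statue"]),
    ]

    def annotate(lat, lon, radius_km=1.0):
        # Stage 1: restrict to community photos near the query GPS fix
        def dist_km(la1, lo1, la2, lo2):
            # Equirectangular approximation, adequate at city scale
            x = math.radians(lo2 - lo1) * math.cos(math.radians((la1 + la2) / 2))
            y = math.radians(la2 - la1)
            return 6371.0 * math.hypot(x, y)

        nearby = [t for la, lo, tags in community
                  if dist_km(lat, lon, la, lo) <= radius_km
                  for t in tags]
        # Stage 2: infer the landmark as the dominant tag among the matches
        return Counter(nearby).most_common(1)[0][0] if nearby else None
    ```

    A full system would combine this GPS filter with visual matching before voting, and would then query the web for a fact about the inferred toponym; the sketch covers only the geographic filtering and tag-voting steps.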