
    New Ontology-Based Image Retrieval Method on 5K Corel Images

    Semantic annotation of images is an important research topic in both image understanding and database or web image search. Image annotation is a technique for choosing appropriate labels for images by extracting effective, hidden features from pictures. In the feature-extraction step of the proposed method, we present a model that combines effective features from visual topics (global features over an image) and regional contexts (relationships between the regions of an image and the regions of other images) for automatic image annotation. In the annotation step of the proposed method, we create a new ontology (based on the WordNet ontology) to capture the semantic relationships between tags in the classification and to narrow the semantic gap that exists in automatic image annotation. Experimental results on the 5K Corel dataset show that, in addition to reducing the complexity of the classification, the proposed method increases annotation accuracy compared to other methods.
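    The fusion of global visual-topic features with regional context features can be sketched as a simple concatenation of two descriptors before classification. This is a minimal illustration only: the histogram and grid-mean extractors below are stand-ins for the paper's actual feature models.

    ```python
    import numpy as np

    def global_topic_features(image, bins=8):
        """Global features over the whole image: a coarse intensity histogram
        (a stand-in for the paper's visual-topic model)."""
        hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0), density=True)
        return hist

    def regional_context_features(image, grid=2):
        """Regional features: mean intensity of each cell in a grid partition
        (a stand-in for the paper's region-relationship features)."""
        h, w = image.shape
        feats = []
        for i in range(grid):
            for j in range(grid):
                cell = image[i * h // grid:(i + 1) * h // grid,
                             j * w // grid:(j + 1) * w // grid]
                feats.append(cell.mean())
        return np.array(feats)

    def combined_features(image):
        """Concatenate global and regional descriptors into one vector."""
        return np.concatenate([global_topic_features(image),
                               regional_context_features(image)])

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    vec = combined_features(img)
    print(vec.shape)  # (12,): 8 histogram bins + 4 grid cells
    ```

    The combined vector would then be fed to whatever classifier performs the tag assignment.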

    Automated annotation of landmark images using community contributed datasets and web resources

    A novel solution to the challenge of automatic image annotation is described. Given an image with GPS data of its location of capture, our system returns a semantically rich annotation comprising tags which both identify the landmark in the image and provide an interesting fact about it, e.g. "A view of the Eiffel Tower, which was built in 1889 for an international exhibition in Paris". This exploits visual and textual web mining in combination with content-based image analysis and natural language processing. In the first stage, an input image is matched to a set of community-contributed images (with keyword tags) on the basis of its GPS information and image classification techniques. The depicted landmark is inferred from the keyword tags for the matched set. The system then takes advantage of the information written about landmarks available on the web at large to extract a fact about the landmark in the image. We report component evaluation results from an implementation of our solution on a mobile device. Image localisation and matching offers 93.6% classification accuracy; the selection of appropriate tags for use in annotation performs well (F1M of 0.59), and it subsequently automatically identifies a correct toponym for use in captioning and fact extraction in 69.0% of the tested cases; finally, the fact extraction returns an interesting caption in 78% of cases.
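    The first-stage GPS filtering can be illustrated with a haversine great-circle distance used to restrict candidate community images to those captured near the query location. This is a sketch of the general idea; the radius threshold and the sample data below are hypothetical, not taken from the paper.

    ```python
    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in kilometres between two GPS coordinates."""
        r = 6371.0  # mean Earth radius, km
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    def nearby_candidates(query, tagged_images, radius_km=1.0):
        """Keep community images whose capture location lies within radius_km."""
        qlat, qlon = query
        return [img for img in tagged_images
                if haversine_km(qlat, qlon, img["lat"], img["lon"]) <= radius_km]

    # Hypothetical community images near the Eiffel Tower (48.8584 N, 2.2945 E).
    images = [
        {"tags": ["eiffel", "paris"], "lat": 48.8580, "lon": 2.2950},
        {"tags": ["louvre"], "lat": 48.8606, "lon": 2.3376},  # ~3 km away
    ]
    print(len(nearby_candidates((48.8584, 2.2945), images)))  # 1
    ```

    Visual matching and keyword-tag aggregation would then operate only on this geographically plausible candidate set.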

    Real-time registration of paper watermarks

    The aim of this article is to outline the issues involved in the application of machine vision to the automatic extraction and registration of watermarks from continuous web paper. The correct identification and localization of watermarks are key issues in paper manufacturing. As well as requiring the position of the watermark for defect detection and classification, it is necessary to ensure its position on the paper prior to the cutting process. Two paper types are discussed, with and without laid and chain lines (these lines appear as a complex periodic background to the watermark and further complicate the segmentation process). We examine both morphological and Fourier approaches to the watermark segmentation process, concentrating specifically on those images with complex backgrounds. Finally, we detail a system design suitable for real-time implementation.
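    The Fourier route to suppressing the periodic laid/chain-line background can be sketched as notch filtering in the frequency domain: the periodic lines concentrate their energy in a few strong off-centre spectral peaks, which can be zeroed while low-frequency watermark content near DC is preserved. This is an illustration of the general technique, not the authors' exact filter design; the threshold and radii are assumptions.

    ```python
    import numpy as np

    def suppress_periodic_background(image, keep_radius=3):
        """Attenuate strong off-centre frequency peaks (the periodic lines)
        while keeping the low-frequency content near the DC term."""
        f = np.fft.fftshift(np.fft.fft2(image))
        mag = np.abs(f)
        h, w = image.shape
        cy, cx = h // 2, w // 2
        yy, xx = np.ogrid[:h, :w]
        dc_zone = (yy - cy) ** 2 + (xx - cx) ** 2 <= keep_radius ** 2
        # Notch out bins far above the median magnitude, except near DC.
        peaks = (mag > 10 * np.median(mag)) & ~dc_zone
        f[peaks] = 0
        return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

    # Synthetic "paper" image: a faint blob (watermark) plus periodic lines.
    h = w = 128
    yy, xx = np.mgrid[:h, :w]
    lines = 0.5 * np.sin(2 * np.pi * xx / 8)                   # chain-line background
    blob = np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / 200.0)  # watermark
    cleaned = suppress_periodic_background(lines + blob)
    print(cleaned.std() < (lines + blob).std())  # periodic energy removed
    ```

    After the periodic background is attenuated, morphological operations or simple thresholding can segment the remaining watermark blob.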

    Learning Deep Visual Object Models From Noisy Web Data: How to Make it Work

    Deep networks thrive when trained on large-scale data collections. This has given ImageNet a central role in the development of deep architectures for visual object classification. However, ImageNet was created during a specific period in time, and as such it is prone to aging, as well as dataset bias issues. Moving beyond fixed training datasets will lead to more robust visual systems, especially when deployed on robots in new environments which must train on the objects they encounter there. To make this possible, it is important to break free from the need for manual annotators. Recent work has begun to investigate how to use the massive amount of images available on the Web in place of manual image annotations. We contribute to this research thread with two findings: (1) a study correlating a given level of label noise to the expected drop in accuracy, for two deep architectures, on two different types of noise, that clearly identifies GoogLeNet as a suitable architecture for learning from Web data; (2) a recipe for the creation of Web datasets with minimal noise and maximum visual variability, based on a visual and natural language processing concept expansion strategy. By combining these two results, we obtain a method for learning powerful deep object models automatically from the Web. We confirm the effectiveness of our approach through object categorization experiments using our Web-derived version of ImageNet on a popular robot vision benchmark database, and on a lifelong object discovery task on a mobile robot.
    Comment: 8 pages, 7 figures, 3 tables
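    The noise study in finding (1) rests on injecting a controlled fraction of wrong labels into a training set and measuring the resulting accuracy drop. The uniform label-flip variant, one common synthetic-noise model, can be sketched as follows (the paper's exact noise models and architectures are not reproduced here):

    ```python
    import random

    def inject_label_noise(labels, num_classes, noise_level, seed=0):
        """Flip each label to a different, uniformly chosen class with
        probability noise_level (a common synthetic-noise model)."""
        rng = random.Random(seed)
        noisy = []
        for y in labels:
            if rng.random() < noise_level:
                wrong = rng.randrange(num_classes - 1)
                noisy.append(wrong if wrong < y else wrong + 1)  # skip true class
            else:
                noisy.append(y)
        return noisy

    clean = [i % 10 for i in range(1000)]
    noisy = inject_label_noise(clean, num_classes=10, noise_level=0.3)
    frac = sum(a != b for a, b in zip(clean, noisy)) / len(clean)
    print(round(frac, 2))  # close to the requested 0.3
    ```

    Training the same architecture at several noise levels and plotting test accuracy against `noise_level` yields the kind of noise-sensitivity curve the study uses to compare architectures.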

    Operational Ship Monitoring System Based on Synthetic Aperture Radar Processing

    This paper presents a Ship Monitoring System (SIMONS) working with Synthetic Aperture Radar (SAR) images. It is able to infer ship detection and classification information, and merge the results with other input channels, such as polls from the Automatic Identification System (AIS). Two main stages can be identified, namely: SAR processing and data dissemination. The former has three independent modules, which are related to Coastline Detection (CD), Ship Detection (SD) and Ship Classification (SC). The latter is solved via an advanced web interface, which is compliant with the open standards defined by the Open Geospatial Consortium (OGC). SIMONS has been designed to be a modular, unsupervised and reliable system that meets Near-Real-Time (NRT) delivery requirements. From data ingestion to product delivery, the processing chain is fully automatic, accepting ERS and ENVISAT formats. SIMONS has been developed by GMV Aerospace, S.A. with three main goals, namely: 1) to limit the dependence on the ancillary information provided by systems such as AIS; 2) to achieve the maximum level of automatism and restrict human manipulation; 3) to limit the error sources and their propagation.
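    Ship detection in SAR amplitude images is commonly implemented with a constant false-alarm rate (CFAR) detector, which flags pixels that stand out against locally estimated sea clutter. The cell-averaging variant below is an illustration of that general technique, not a description of SIMONS's actual SD module; the window sizes and threshold factor are assumptions.

    ```python
    import numpy as np

    def ca_cfar(image, guard=2, train=4, scale=5.0):
        """Cell-averaging CFAR: flag a pixel as a target when it exceeds
        scale * (mean of a surrounding training ring, excluding guard cells)."""
        h, w = image.shape
        half = guard + train
        detections = np.zeros_like(image, dtype=bool)
        for y in range(half, h - half):
            for x in range(half, w - half):
                window = image[y - half:y + half + 1, x - half:x + half + 1].copy()
                # Mask out the guard cells and the cell under test.
                window[train:-train, train:-train] = np.nan
                clutter = np.nanmean(window)
                detections[y, x] = image[y, x] > scale * clutter
        return detections

    # Synthetic sea clutter (exponential speckle) with one bright "ship" cluster.
    rng = np.random.default_rng(1)
    sea = rng.exponential(1.0, (64, 64))
    sea[30:32, 30:32] += 50.0
    hits = ca_cfar(sea)
    print(hits[30, 30], int(hits.sum()))
    ```

    In an operational chain such as the one described, a coastline mask would first remove land pixels so the clutter estimate reflects sea only, and the detected clusters would then be passed to the classification module.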