    Fish4Knowledge: Collecting and Analyzing Massive Coral Reef Fish Video Data

    This book gives a start-to-finish overview of the whole Fish4Knowledge project in 18 short chapters, each describing one aspect of the project. The Fish4Knowledge project explored the possibilities of big video data, in this case from undersea video. Recording and analyzing 90,000 hours of video from ten camera locations, the project gives a three-year view of fish abundance in several tropical coral reefs off the coast of Taiwan. The project built a remote recording network, over 100 TB of storage, supercomputer processing, video target detection and …

    NEW ONTOLOGY-BASED IMAGE RETRIEVAL METHOD ON 5K COREL IMAGES

    Semantic annotation of images is an important research topic in both image understanding and database or web image search. Image annotation is the task of choosing appropriate labels for images by extracting effective, hidden features from the pictures. In the feature extraction step of the proposed method, we present a model that combines visual topics (global features over an image) and regional contexts (relationships between the regions of an image and the regions of other images) for automatic image annotation. In the annotation step, we create a new ontology (based on the WordNet ontology) that captures the semantic relationships between tags during classification and narrows the semantic gap in automatic image annotation. Experimental results on the 5k Corel dataset show that, in addition to reducing the complexity of classification, the proposed annotation method increases accuracy compared to other methods.
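
    The abstract's ontology extends WordNet, whose details are not given here; as a rough illustration of the underlying idea only, the sketch below (Python with the nltk WordNet corpus; both function names are hypothetical, not the paper's own) re-ranks candidate tags by their mutual WordNet relatedness so that semantically coherent labels survive:

        # Requires: pip install nltk, then nltk.download('wordnet')
        from nltk.corpus import wordnet as wn

        def tag_relatedness(tag_a, tag_b):
            """Best path similarity between the noun senses of two tags."""
            syns_a = wn.synsets(tag_a, pos=wn.NOUN)
            syns_b = wn.synsets(tag_b, pos=wn.NOUN)
            if not syns_a or not syns_b:
                return 0.0
            return max(s1.path_similarity(s2) or 0.0
                       for s1 in syns_a for s2 in syns_b)

        def refine_annotations(candidate_tags, top_k=5):
            """Keep the top_k tags most related, on average, to the others."""
            def coherence(tag):
                others = [o for o in candidate_tags if o != tag]
                return sum(tag_relatedness(tag, o) for o in others) / max(len(others), 1)
            return sorted(candidate_tags, key=coherence, reverse=True)[:top_k]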

    Patch-based semantic labelling of images.

    The work presented in this thesis focuses on associating semantics with the content of an image, linking the content to high-level semantic categories. The process can take place at two levels: either at image level, towards image categorisation, or at pixel level, in semantic segmentation or semantic labelling. To this end, an analysis framework is proposed, and the different steps of part (or patch) extraction, description and probabilistic modelling are detailed. Parts of different natures are used, and one of the contributions is a method to complement the information associated with them. Context for parts has to be considered at different scales. Short-range pixel dependences are accounted for by associating pixels to larger patches. A Conditional Random Field, that is, a probabilistic discriminative graphical model, is used to model medium-range dependences between neighbouring patches. Another contribution is an efficient method to consider rich neighbourhoods without having loops in the inference graph. To this end, weak neighbours are introduced, that is, neighbours whose label probability distribution is pre-estimated rather than mutable during the inference. Longer-range dependences, which tend to make the inference problem intractable, are addressed as well. A novel descriptor based on local histograms of visual words is proposed, meant to both complement the feature descriptor of the patches and augment the context awareness in the patch-labelling process. Finally, an alternative approach to considering multiple scales in a hierarchical framework based on image pyramids is proposed; an image pyramid is a compositional representation of the image based on hierarchical clustering. All the presented contributions are detailed extensively throughout the thesis, and experimental results on publicly available datasets are reported to assess their validity. A critical comparison with the state of the art in this research area is also presented, and the advantages of adopting the proposed improvements are clearly highlighted.
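
    The local-histogram-of-visual-words descriptor lends itself to a compact sketch. Assuming a per-pixel codeword map has already been computed (e.g. by k-means over patch descriptors; the function and parameter names below are illustrative, not the thesis's own), the context descriptor of each pixel is the normalised histogram of codewords in its neighbourhood:

        import numpy as np

        def local_word_histograms(word_map, n_words, radius=8):
            """Per-pixel L1-normalised histogram of visual words in a
            (2*radius+1)^2 window. word_map is an (H, W) int array of
            codeword indices; n_words is the codebook size."""
            H, W = word_map.shape
            hists = np.zeros((H, W, n_words), dtype=np.float32)
            for y in range(H):
                for x in range(W):
                    y0, y1 = max(0, y - radius), min(H, y + radius + 1)
                    x0, x1 = max(0, x - radius), min(W, x + radius + 1)
                    counts = np.bincount(word_map[y0:y1, x0:x1].ravel(),
                                         minlength=n_words)
                    hists[y, x] = counts / counts.sum()
            return hists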

    Large-scale image collection cleansing, summarization and exploration

    A perennially interesting topic in the research field of large-scale image collection organization is how to effectively and efficiently conduct the tasks of image cleansing, summarization and exploration. The primary objective of such an image organization system is to enhance the user's exploration experience through redundancy removal and summarization operations on a large-scale image collection. An ideal system discovers and utilizes the visual correlation among the images, reduces the redundancy in the collection, organizes and visualizes its structure, and facilitates exploration and knowledge discovery. In this dissertation, a novel system is developed for exploiting and navigating large-scale image collections. Our system consists of the following key components: (a) junk image filtering by incorporating bilingual search results; (b) near-duplicate image detection using a coarse-to-fine framework; (c) concept network generation and visualization; (d) image collection summarization via dictionary learning for sparse representation; and (e) a multimedia practice of graffiti image retrieval and exploration. For junk image filtering, bilingual image search results, obtained for the same keyword-based query, are integrated to automatically identify the clusters of junk images and the clusters of relevant images. Within the relevant image clusters, the results are further refined by removing duplicates under a coarse-to-fine structure. Duplicate pairs are detected with both a global feature (a partition-based color histogram) and local features (CPAM and a SIFT bag-of-words model), and the duplicates are removed from the collection to facilitate further exploration and visual correlation analysis. After junk image filtering and duplicate removal, the visual concepts are organized and visualized by the proposed concept network. An automatic algorithm is developed to generate this visual concept network, which characterizes the visual correlation between image concept pairs; multiple kernels are combined and a kernel canonical correlation analysis algorithm is used to characterize the diverse visual similarity contexts between the image concepts. The FishEye visualization technique is implemented to facilitate navigation of image concepts through the concept network. To better assist the exploration of large-scale data collections, we design an efficient summarization algorithm to extract representative exemplars. For this summarization task, a sparse dictionary (a small set of the most representative images) is learned to represent all the images in the given set, i.e., the sparse dictionary is treated as the summary of the given image set. A simulated annealing algorithm is adopted to learn this sparse dictionary (image summary) by minimizing an explicit optimization function. To handle large-scale image collections, we have evaluated both the accuracy of the proposed algorithms and their computational efficiency. For each of the above tasks, we conducted experiments on multiple publicly available image collections, such as ImageNet, NUS-WIDE and LabelMe, and observed very promising results compared to existing frameworks. The computational performance is also satisfactory for large-scale applications.
    The original intention in designing such a large-scale image collection exploration and organization system is to better serve the tasks of information retrieval and knowledge discovery. To this end, we apply the proposed system to a graffiti retrieval and exploration application and have received positive feedback.
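
    As a rough sketch of the summarization step, the snippet below (Python; the coverage cost is a stand-in for the dissertation's sparse-representation objective, and all names are illustrative) selects k exemplar images by simulated annealing over random single-exemplar swaps:

        import numpy as np

        def summarize(features, k, iters=5000, t0=1.0, cooling=0.999, seed=0):
            """Pick k exemplar rows of `features` (n x d) by simulated annealing.
            Cost: mean distance of every image to its nearest exemplar."""
            rng = np.random.default_rng(seed)
            n = len(features)

            def cost(idx):
                d = np.linalg.norm(features[:, None, :] - features[idx][None, :, :],
                                   axis=2)
                return d.min(axis=1).mean()

            current = rng.choice(n, size=k, replace=False)
            cur_c = cost(current)
            best, best_c, t = current.copy(), cur_c, t0
            for _ in range(iters):
                cand = current.copy()
                cand[rng.integers(k)] = rng.integers(n)   # swap one exemplar
                if len(set(cand)) < k:                    # reject duplicate picks
                    continue
                c = cost(cand)
                # Accept improvements always, worse moves with Boltzmann probability
                if c < cur_c or rng.random() < np.exp((cur_c - c) / t):
                    current, cur_c = cand, c
                    if c < best_c:
                        best, best_c = cand.copy(), c
                t *= cooling                              # cool the temperature
            return best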

    Visual Domain Ontology using OWL Lite for Semantic Image Processing

    In this paper, a visual domain ontology (VDO) is constructed using the OWL-Lite language. The VDO passes through two execution phases, namely construction and inference. In the construction phase, OWL classes are initialized, with reference to annotated scenes, and connected by hierarchical, spatial, and content-based relationships (the presence or absence of some objects depends on other objects). In the inference phase, the VDO is used to infer knowledge about an unknown scene. This paper aims to use a standard language, namely OWL, to represent non-standard visual knowledge; to facilitate straightforward ontology enrichment; and to define inference rules based on the constructed ontology. OWL standardizes the constructed knowledge and facilitates advanced inference because it is built on top of first-order logic and description logic. The VDO thus allows efficient representation of, and reasoning over, complex visual knowledge. In addition to representation, the VDO enables easy extension, sharing, and reuse of the represented visual knowledge.
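
    OWL ontologies of this kind can be scripted directly. A minimal sketch with the owlready2 Python library, using invented class and property names and an invented IRI (the paper's actual VDO is built from annotated scenes), shows the three relationship types the construction phase wires up:

        from owlready2 import Thing, get_ontology

        onto = get_ontology("http://example.org/vdo.owl")  # hypothetical IRI

        with onto:
            class Scene(Thing): pass
            class VisualObject(Thing): pass
            class Beach(Scene): pass                          # hierarchical: Beach is-a Scene

            class contains(Scene >> VisualObject): pass       # content-based relationship
            class leftOf(VisualObject >> VisualObject): pass  # spatial relationship

        # Populate from an annotated scene, then save; the inference phase
        # could later run a reasoner (e.g. owlready2's sync_reasoner()).
        beach = Beach("scene_01")
        beach.contains = [VisualObject("sea_01"), VisualObject("sand_01")]
        onto.save(file="vdo.owl")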

    I2T: Image Parsing to Text Description

    IMAGE RETRIEVAL BASED ON COMPLEX DESCRIPTIVE QUERIES

    The amount of visual data such as images and videos available on the web has increased exponentially over the last few years. To efficiently organize and exploit these massive collections, a system, apart from being able to answer simple classification-based questions such as whether a specific object is present (or absent) in an image, should also be capable of searching images and videos based on more complex descriptive questions. There is also a considerable amount of structure in the visual world which, if effectively utilized, can help achieve this goal. To this end, we first present an approach for image ranking and retrieval based on queries consisting of multiple semantic attributes. We further show that there are significant correlations between these attributes, and that accounting for them leads to superior performance. Next, we extend this by proposing an image retrieval framework for descriptive queries composed of object categories, semantic attributes and spatial relationships. The proposed framework also includes a unique multi-view hashing technique, which enables query specification in three different modalities: image, sketch and text. We also demonstrate the effectiveness of leveraging contextual information to reduce the supervision requirements for learning object and scene recognition models. We present an active learning framework to simultaneously learn appearance and contextual models for scene understanding. Within this framework we introduce new kinds of labeling questions that are designed to collect appearance as well as contextual information and that mimic the way in which humans actively learn about their environment. Furthermore, we explicitly model the contextual interactions between the regions within an image and select the question that leads to the maximum reduction in the combined entropy of all the regions in the image (the image entropy).
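
    The image-entropy quantity at the end reduces to a few lines. The sketch below (Python, illustrative names; it uses a simpler uncertainty-sampling proxy, whereas the full framework scores candidate questions by the expected reduction of this quantity) computes an image's entropy as the summed Shannon entropy of its regions' label distributions:

        import numpy as np

        def image_entropy(region_probs):
            """Combined entropy of an image: sum of the Shannon entropies of
            its regions' label distributions (each a 1-D probability array)."""
            eps = 1e-12
            return float(sum(-(p * np.log(p + eps)).sum() for p in region_probs))

        def select_image(pool):
            """Uncertainty-sampling proxy: query the image whose regions are
            collectively most uncertain. pool maps image id -> region_probs."""
            return max(pool, key=lambda img: image_entropy(pool[img]))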