
    Image Reconstruction from Bag-of-Visual-Words

    The objective of this work is to reconstruct an original image from its Bag-of-Visual-Words (BoVW) representation. Image reconstruction from features can be a means of identifying the characteristics of those features, and it also enables us to generate novel images via features. Although BoVW is the de facto standard feature for image recognition and retrieval, successful image reconstruction from BoVW has not been reported yet. What complicates this task is that BoVW discards the spatial arrangement of the visual words it contains. To estimate the original arrangement, we propose an evaluation function that incorporates the naturalness of local adjacency and of global position, together with a method for obtaining the related parameters from an external image database. To evaluate the performance of our method, we reconstruct images of 101 kinds of objects. Additionally, we apply our method to analyze object classifiers and to generate novel images via BoVW.
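    As a rough illustration of the kind of evaluation function the abstract describes, the Python sketch below scores a candidate spatial arrangement of visual words by combining a local-adjacency term with a global-position term. It is not the authors' code: the function name arrangement_score and the probability tables adjacency_prob and position_prob are hypothetical, standing in for parameters that would be estimated from an external image database.

        # Illustrative sketch, not the paper's implementation.
        import numpy as np

        def arrangement_score(grid, adjacency_prob, position_prob, alpha=0.5):
            """grid[i, j] holds the id of the visual word placed at cell (i, j)."""
            h, w = grid.shape
            local = 0.0
            for i in range(h):
                for j in range(w):
                    if j + 1 < w:  # horizontal neighbour pair
                        local += np.log(adjacency_prob[grid[i, j], grid[i, j + 1]] + 1e-12)
                    if i + 1 < h:  # vertical neighbour pair
                        local += np.log(adjacency_prob[grid[i, j], grid[i + 1, j]] + 1e-12)
            # Global term: how likely each word is to appear at its grid position.
            global_term = sum(np.log(position_prob[grid[i, j], i, j] + 1e-12)
                              for i in range(h) for j in range(w))
            return alpha * local + (1.0 - alpha) * global_term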

    Automatic annotation for weakly supervised learning of detectors

    Object detection in images and action detection in videos are among the most widely studied computer vision problems, with applications in consumer photography, surveillance, and automatic media tagging. Typically, these standard detectors are fully supervised: they require a large body of training data in which the locations of the objects/actions in the images/videos have been manually annotated. With the emergence of digital media and the rise of high-speed internet, raw images and video are available at little to no cost, but the manual annotation of object and action locations remains tedious, slow, and expensive. As a result, there has been great interest in training detectors with weak supervision, where only the presence or absence of the object/action in an image/video is needed, not its location. This thesis presents approaches for weakly supervised learning of object/action detectors, with a focus on automatically annotating object and action locations in images/videos using only binary weak labels indicating the presence or absence of the object/action. First, a framework for weakly supervised learning of object detectors in images is presented. In the proposed approach, a variation of the multiple instance learning (MIL) technique for automatically annotating object locations in weakly labelled data is presented which, unlike existing approaches, uses inter-class and intra-class cue fusion to obtain the initial annotation. The initial annotation is then used to start an iterative process in which standard object detectors refine the location annotation. Finally, to ensure that the iterative training of detectors does not drift from the object of interest, a scheme for detecting model drift is also presented. Furthermore, unlike most other methods, our weakly supervised approach is evaluated on data without manual pose (object orientation) annotation. Second, an analysis of the initial annotation of objects using inter-class and intra-class cues is carried out. From this analysis, a new method based on negative mining (NegMine) is presented for the initial annotation of both object and action data. The NegMine-based approach is a much simpler formulation using only an inter-class measure and requires no complex combinatorial optimisation, yet it can meet or outperform existing approaches, including the previously presented inter-intra class cue fusion approach. Furthermore, NegMine can be fused with existing approaches to boost their performance. Finally, the thesis takes a step back and looks at the use of generic object detectors as prior knowledge in weakly supervised learning of object detectors. These generic object detectors are typically based on sampling saliency maps that indicate whether a pixel belongs to the background or the foreground. A new approach to generating saliency maps is presented that, unlike existing approaches, looks beyond the current image of interest and into images similar to it. We show that our generic object proposal method can be used by itself to annotate weakly labelled object data with surprisingly high accuracy.
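    The iterative refinement loop described above (initial annotation from weak labels, detector training, re-annotation, drift check) can be sketched roughly as follows. This is not the thesis implementation: features, candidate windows and the drift test are simplified placeholders, the helper refine_annotations is hypothetical, and a linear SVM stands in for the standard detector.

        # Minimal sketch of weakly supervised annotation refinement, under the
        # assumptions stated above.
        import numpy as np
        from sklearn.svm import LinearSVC

        def refine_annotations(pos_bags, neg_feats, init_select, n_iters=5):
            """pos_bags: list of (n_windows_i, d) arrays of candidate-window features,
            one per positive image; neg_feats: (n_neg, d) features from negative
            images; init_select: initially chosen window index per positive image."""
            selected = list(init_select)
            for _ in range(n_iters):
                X = np.vstack([bag[s] for bag, s in zip(pos_bags, selected)] + [neg_feats])
                y = np.hstack([np.ones(len(pos_bags)), -np.ones(len(neg_feats))])
                clf = LinearSVC(C=1.0).fit(X, y)        # stand-in "standard detector"
                # Re-annotate: pick the highest-scoring window in each positive image.
                new_selected = [int(np.argmax(clf.decision_function(bag))) for bag in pos_bags]
                if new_selected == selected:            # crude stand-in for a drift/convergence check
                    break
                selected = new_selected
            return selected, clf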

    An Efficient Perceptual of Content Based Image Retrieval System Using SVM and Evolutionary Algorithms

    Content-based image retrieval (CBIR) indexes and retrieves images based on their visual content, avoiding several issues associated with traditional keyword-based retrieval; consequently, interest in CBIR has grown in recent years. The performance of a CBIR system mainly depends on the image representation and the similarity matching function used. A new CBIR system is therefore proposed that gives more accurate results than previously developed systems. This work introduces a new composite framework for image classification in a content-based image retrieval system. The proposed composite framework uses an evolutionary algorithm to select training samples for a support vector machine (SVM). To design such a system, the most popular content-based image retrieval techniques are reviewed first. Our review reveals some limitations of the existing techniques that prevent them from accurately addressing certain issues.
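    The central idea, evolutionary selection of SVM training samples, is sketched below for illustration only. The genetic operators, parameters and fitness definition (validation accuracy of an RBF SVM) are assumptions, not the system proposed in the paper, and evolve_training_subset is a hypothetical helper.

        # Hedged sketch: a simple genetic algorithm evolving a boolean mask over
        # candidate training samples, scored by SVM accuracy on a validation set.
        import numpy as np
        from sklearn.svm import SVC

        def evolve_training_subset(X, y, X_val, y_val, pop=20, gens=15, rate=0.05, seed=0):
            rng = np.random.default_rng(seed)
            masks = rng.random((pop, len(X))) < 0.5              # initial population of sample subsets

            def fitness(m):
                if m.sum() < 2 or len(np.unique(y[m])) < 2:      # need at least two classes
                    return 0.0
                return SVC(kernel="rbf").fit(X[m], y[m]).score(X_val, y_val)

            for _ in range(gens):
                scores = np.array([fitness(m) for m in masks])
                parents = masks[np.argsort(scores)[-pop // 2:]]  # keep the better half
                children = []
                for _ in range(pop - len(parents)):
                    a, b = parents[rng.integers(len(parents), size=2)]
                    cut = rng.integers(1, len(X))
                    child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
                    child ^= rng.random(len(X)) < rate           # bit-flip mutation
                    children.append(child)
                masks = np.vstack([parents, children])
            return masks[np.argmax([fitness(m) for m in masks])]  # best subset mask found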

    Hidden and Unknown Object Detection in Video

    Object detection is applied to find real-world objects such as faces, bicycles and buildings in images and videos. Object detection algorithms normally use extracted features and learning algorithms to recognise object categories, and are employed in applications such as image retrieval, security, surveillance and automated vehicle parking systems. Objects can be detected with a range of models, including feature-based object detection, Viola-Jones object detection, SVM classification with histogram of oriented gradients (HOG) features, image segmentation and blob analysis. For the detection of hidden objects in video, the object-class detection method is used, in which case the object or objects are defined in the video in advance [1][2]. The proposed method is based on bitwise XOR comparison [3]. The method (system) detects both moving and static hidden objects. The developed method detects objects with high accuracy; it also detects hidden objects that closely resemble the background in colour and are therefore undetectable by the human eye. There is no need to define or describe the searched object before detection, so the algorithm does not limit the search to objects of a particular type; it is designed to detect objects of any type and size. It is also designed to work under changing weather conditions and at any time of day, irrespective of the brightness of the sun (which increases or decreases the intensity of an image), so the method works dynamically. A system has been developed to implement the method.
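    Since the abstract does not give details of the bitwise XOR comparison, the following is only a minimal sketch of the general idea, assuming a per-pixel XOR between the current frame and a reference background frame followed by thresholding. The function name xor_change_mask, the threshold value and the morphological clean-up are placeholders, not the authors' system.

        # Minimal sketch of XOR-based change detection against a background frame.
        import cv2
        import numpy as np

        def xor_change_mask(background_bgr, frame_bgr, thresh=10):
            bg = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)
            fr = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            diff = cv2.bitwise_xor(bg, fr)                          # per-pixel bitwise comparison
            _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,           # suppress isolated noise pixels
                                    np.ones((3, 3), np.uint8))
            return mask  # non-zero where the frame differs from the background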

    Feature Extraction and Classification of Automatically Segmented Lung Lesion Using Improved Toboggan Algorithm

    The accurate detection of lung lesions from computed tomography (CT) scans is essential for clinical diagnosis and provides valuable information for the treatment of lung cancer. However, achieving fully automatic lesion detection remains challenging. Here, a novel segmentation algorithm is proposed: an improved toboggan algorithm with a three-step framework comprising automatic seed point selection, multi-constraint lesion extraction and lesion refinement. Then, local binary pattern (LBP), wavelet, contourlet and grey level co-occurrence matrix (GLCM) descriptors are applied to each region of interest of the segmented lung lesion image to extract texture features such as contrast, homogeneity, energy and entropy, together with statistical features such as the mean, variance, standard deviation, and the convolution of modulated and normal frequencies. Finally, support vector machine (SVM) and K-nearest neighbour (KNN) classifiers are applied to classify the abnormal region based on the extracted features, and their performance is compared. An accuracy of 97.8% is obtained using the SVM classifier, higher than that of the KNN classifier. This approach does not require any human interaction for lesion detection. Thus, the improved toboggan algorithm can achieve precise lung lesion segmentation in CT images, and the extracted features also help to classify the lesion regions of the lungs efficiently.
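    To make the feature-extraction and classification stage concrete, here is an illustrative sketch; segmentation by the improved toboggan algorithm is assumed to have produced the region of interest elsewhere. It computes a few of the GLCM texture and intensity statistics named in the abstract and compares SVM and KNN classifiers. The helper names and parameter choices are assumptions, not the paper's settings.

        # Illustrative GLCM feature extraction and SVM-vs-KNN comparison.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19 (older versions spell it greycomatrix)
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier

        def glcm_features(roi_uint8):
            glcm = graycomatrix(roi_uint8, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            feats = [graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "homogeneity", "energy")]
            p = glcm[:, :, 0, 0]
            feats.append(-np.sum(p[p > 0] * np.log2(p[p > 0])))   # GLCM entropy
            feats += [roi_uint8.mean(), roi_uint8.var(), roi_uint8.std()]
            return np.array(feats)

        def compare_classifiers(X_train, y_train, X_test, y_test):
            svm = SVC(kernel="rbf").fit(X_train, y_train)
            knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
            return svm.score(X_test, y_test), knn.score(X_test, y_test)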

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision.
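    As a small illustration of the sparse-coding idea surveyed above (representing data with linear combinations of a few dictionary elements), the sketch below runs scikit-learn's dictionary learning on random stand-in patches. It is an example of the general technique, not code from the monograph, and the data and parameter values are arbitrary.

        # Learn a dictionary and sparse codes, then reconstruct X as codes . D.
        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 64))                 # stand-in for flattened 8x8 image patches

        dico = DictionaryLearning(n_components=32, transform_algorithm="omp",
                                  transform_n_nonzero_coefs=5, max_iter=20, random_state=0)
        codes = dico.fit_transform(X)                      # sparse coefficients, few non-zeros per row
        reconstruction = codes @ dico.components_          # approximate reconstruction of X
        print(np.mean(np.abs(codes) > 1e-8, axis=1).mean())  # average fraction of active atoms per sample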

    Novel Application of Neutrosophic Logic in Classifiers Evaluated under Region-Based Image Categorization System

    Neutrosophic logic is a relatively new logic that generalizes fuzzy logic. In this dissertation, for the first time, neutrosophic logic is applied to the field of classifiers, with a support vector machine (SVM) adopted as the example to validate the feasibility and effectiveness of neutrosophic logic. The proposed neutrosophic set is integrated into a reformulated SVM, and the performance of the resulting classifier, N-SVM, is evaluated under an image categorization system. Image categorization is an important yet challenging research topic in computer vision. In this dissertation, images are first segmented by a hierarchical two-stage self-organizing map (HSOM), using color and texture features. A novel approach is proposed to select the training samples of the HSOM based on homogeneity properties. A diverse density support vector machine (DD-SVM) framework that extends the multiple-instance learning (MIL) technique is then applied to the image categorization problem by viewing an image as a bag of instances corresponding to the regions obtained from image segmentation. Using the instance prototypes, every bag is mapped to a point in the new bag space, and categorization is transformed into a classification problem. Then, the proposed N-SVM based on the neutrosophic set is used as the classifier in the new bag space. N-SVM treats samples differently according to a weighting function, which helps reduce the effect of outliers. Experimental results on a COREL dataset of 1000 general-purpose images and a Caltech 101 dataset of 9000 images demonstrate the validity and effectiveness of the proposed method.
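    The weighting idea behind N-SVM, down-weighting likely outliers so they influence the decision boundary less, can be illustrated with a standard SVM trained with per-sample weights, as sketched below. The distance-based weights and the helper train_weighted_svm are hypothetical stand-ins for the neutrosophic membership function, not the dissertation's formulation.

        # Sketch: samples far from their class centre get smaller training weights.
        import numpy as np
        from sklearn.svm import SVC

        def train_weighted_svm(X, y):
            weights = np.empty(len(X))
            for c in np.unique(y):
                idx = np.where(y == c)[0]
                d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
                weights[idx] = 1.0 / (1.0 + d / (d.mean() + 1e-12))  # likely outliers weigh less
            return SVC(kernel="rbf").fit(X, y, sample_weight=weights)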