
    Applying local cooccurring patterns for object detection from aerial images

    Developing spatial searching tools to enhance the search capabilities of large spatial repositories for Geographical Information System (GIS) updates has attracted increasing attention. Typically, objects to be detected are represented by many local features or local parts. Test images are processed by extracting local features, which are then matched with the object's model image. Most existing work that uses local features assumes that the local features are independent of each other. However, in many cases this is not true. In this paper, a method of applying local cooccurring patterns to disclose the cooccurring relationships between local features for object detection is presented. Features of the object of interest, including colour features and edge-based shape features, are collected. To reveal the cooccurring patterns among multiple local features, a colour cooccurrence histogram is constructed and used to search for objects of interest in target images. The method is demonstrated by detecting swimming pools in aerial images. Our experimental results show the feasibility of using this method to effectively reduce the labour involved in finding man-made objects of interest in aerial images. © Springer-Verlag Berlin Heidelberg 2007
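    The core idea above can be sketched as follows (an illustration, not the authors' implementation): quantize the colours of an image and count how often pairs of quantized colours occur at a fixed spatial offset. The bin count and horizontal offset here are illustrative choices.

    ```python
    import numpy as np

    def color_cooccurrence_histogram(img, n_bins=4, distance=1):
        """Count pairs of quantized colours separated by a fixed horizontal offset."""
        # Quantize each 8-bit channel into n_bins levels, then fold RGB into one index.
        q = (img.astype(int) * n_bins) // 256          # shape (H, W, 3)
        idx = (q[..., 0] * n_bins + q[..., 1]) * n_bins + q[..., 2]
        hist = np.zeros((n_bins ** 3, n_bins ** 3), dtype=int)
        left, right = idx[:, :-distance], idx[:, distance:]
        np.add.at(hist, (left.ravel(), right.ravel()), 1)
        return hist

    img = np.zeros((4, 4, 3), dtype=np.uint8)   # all-black toy image
    h = color_cooccurrence_histogram(img)
    print(h[0, 0])   # all 12 horizontal pixel pairs fall in bin (0, 0)
    ```

    Matching then amounts to comparing such histograms between a model image and candidate windows of the target image.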

    Local object patterns for representation and classification of colon tissue images

    This paper presents a new approach for the effective representation and classification of images of histopathological colon tissues stained with hematoxylin and eosin. In this approach, we propose to decompose a tissue image into its histological components and introduce a set of new texture descriptors, which we call local object patterns, on these components to model their composition within a tissue. We define these descriptors using the idea of local binary patterns, which quantify a pixel by constructing a binary string based on the relative intensities of its neighbors. However, as opposed to pixel-level local binary patterns, we define our local object pattern descriptors at the component level to quantify a component. To this end, we specify neighborhoods with different locality ranges and encode spatial arrangements of the components within the specified local neighborhoods by generating strings. We then extract our texture descriptors from these strings to characterize histological components and construct the bag-of-words representation of an image from the characterized components. Working on microscopic images of colon tissues, our experiments reveal that the use of these component-level texture descriptors results in higher classification accuracies than previous textural approaches. © 2013 IEEE
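    The pixel-level local binary pattern that these component-level descriptors generalize can be sketched in a few lines (a minimal illustration, not the paper's component-level code): each of the 8 neighbours contributes a bit depending on whether it is at least as bright as the centre pixel.

    ```python
    import numpy as np

    def lbp_pixel(img, y, x):
        """8-neighbour local binary pattern code for one interior pixel."""
        c = img[y, x]
        # Clockwise neighbours starting from the top-left.
        offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        bits = [1 if img[y + dy, x + dx] >= c else 0 for dy, dx in offs]
        return int("".join(map(str, bits)), 2)

    img = np.array([[9, 9, 9],
                    [1, 5, 1],
                    [9, 9, 9]])
    print(lbp_pixel(img, 1, 1))   # 0b11101110 -> 238
    ```

    The paper's descriptors apply the same string-encoding idea at the level of histological components rather than pixels.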

    Evaluating color texture descriptors under large variations of controlled lighting conditions

    The recognition of color texture under varying lighting conditions is still an open issue. Several features have been proposed for this purpose, ranging from traditional statistical descriptors to features extracted with neural networks. Still, it is not completely clear under what circumstances a feature performs better than the others. In this paper we report an extensive comparison of old and new texture features, with and without a color normalization step, with a particular focus on how they are affected by small and large variations in the lighting conditions. The evaluation is performed on a new texture database including 68 samples of raw food acquired under 46 conditions that present single and combined variations of light color, direction, and intensity. The database allows systematic investigation of the robustness of texture descriptors across a large range of variations in imaging conditions. (Submitted to the Journal of the Optical Society of America.)
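    One widely used color normalization step of the kind compared in such studies is grey-world normalization, which scales each channel so its mean matches the global mean intensity. This is a generic sketch, not necessarily the normalization used in the paper.

    ```python
    import numpy as np

    def gray_world(img):
        """Scale each channel so its mean matches the global mean intensity."""
        means = img.reshape(-1, 3).mean(axis=0)
        return img * (means.mean() / means)

    img = np.array([[[100.0, 200.0, 300.0]]])   # one-pixel toy image
    print(gray_world(img))   # each channel rescaled to the global mean, 200
    ```

    The assumption behind it is that the average scene colour is grey, so any overall colour cast is attributed to the illuminant and divided out.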

    Discovering local cooccurring patterns from aerial images

    Developing a spatial searching engine to enhance the search capabilities of large spatial repositories for GIS updates has attracted increasing attention. Existing methods are usually designed to extract a limited range of object types and use only one aspect of image features. In this paper, we propose to use local cooccurring patterns to disclose the cooccurring relationships among dominant local features, and to use these patterns to recognize objects in aerial images. For this purpose, we investigate three types of local features: colour-based features, texture-based features, and edge-based shape features. To facilitate the feature extraction procedure, we first filter the input image with discontinuity-preserving smoothing methods. Two popular smoothing techniques are tested and compared. Experimental results are presented in this paper.
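    Discontinuity-preserving smoothing of the kind mentioned above can be illustrated with a bilateral filter, which averages neighbours weighted by both spatial closeness and intensity closeness, so flat regions are smoothed while step edges survive. This 1-D sketch is illustrative; it is not one of the specific techniques tested in the paper.

    ```python
    import numpy as np

    def bilateral_1d(signal, sigma_s=1.0, sigma_r=10.0, radius=2):
        """Edge-preserving smoothing: weights combine spatial and range closeness."""
        out = np.empty_like(signal, dtype=float)
        n = len(signal)
        for i in range(n):
            lo, hi = max(0, i - radius), min(n, i + radius + 1)
            x = np.arange(lo, hi)
            # Spatial term penalizes distance; range term penalizes intensity difference.
            w = np.exp(-((x - i) ** 2) / (2 * sigma_s ** 2)
                       - ((signal[x] - signal[i]) ** 2) / (2 * sigma_r ** 2))
            out[i] = np.sum(w * signal[x]) / np.sum(w)
        return out

    step = np.array([0.0, 0.0, 0.0, 100.0, 100.0, 100.0])
    smoothed = bilateral_1d(step)
    print(smoothed)   # the step edge stays sharp; flat regions stay flat
    ```

    A plain Gaussian blur would smear the step across several samples; here the range term drives the cross-edge weights to nearly zero.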

    Identification of White Blood Cells Using Machine Learning Classification Based on Feature Extraction

    In various disease diagnoses, one of the parameters is the white blood cell count, covering eosinophils, basophils, neutrophils, lymphocytes, and monocytes. Manual identification takes a long time and tends to be subjective, depending on the staff's experience, so automatic identification of white blood cells is faster and more accurate. White blood cells are identified from a stained blood smear (SADT) examined under a digital microscope to obtain cell images. The white blood cell images are segmented in the HSV (Hue, Saturation, Value) color space, and features are extracted with the Gray Level Cooccurrence Matrix (GLCM) method using the Angular Second Moment (ASM), Contrast, Entropy, and Inverse Difference Moment (IDM) features. The purpose of this study was to identify white blood cells by comparing the classification accuracy of the K-nearest neighbor (KNN), Naïve Bayes Classification (NBC), and Multilayer Perceptron (MLP) methods. Classification used 100 training images and 50 testing images of white blood cells. Tests of the KNN, NBC, and MLP methods yielded accuracies of 82%, 80%, and 94%, respectively. Therefore, MLP was chosen as the best classification model for the identification of white blood cells.
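    The GLCM features named above (ASM, Contrast, Entropy, IDM) follow directly from their standard definitions over the normalized co-occurrence matrix; the toy image, grey-level count, and offset below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def glcm(img, levels, dx=1, dy=0):
        """Normalized grey-level co-occurrence matrix for one pixel offset."""
        m = np.zeros((levels, levels))
        h, w = img.shape
        for y in range(h - dy):
            for x in range(w - dx):
                m[img[y, x], img[y + dy, x + dx]] += 1
        return m / m.sum()

    def glcm_features(p):
        i, j = np.indices(p.shape)
        asm = np.sum(p ** 2)                              # Angular Second Moment
        contrast = np.sum(p * (i - j) ** 2)
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        idm = np.sum(p / (1 + (i - j) ** 2))              # Inverse Difference Moment
        return asm, contrast, entropy, idm

    img = np.array([[0, 0, 1, 1],
                    [0, 0, 1, 1],
                    [2, 2, 3, 3],
                    [2, 2, 3, 3]])
    p = glcm(img, levels=4)
    print(glcm_features(p))
    ```

    Each feature summarizes a different aspect of texture: ASM rewards uniformity, Contrast rewards large grey-level jumps, Entropy rewards randomness, and IDM rewards local homogeneity.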

    Deep Learning Approach for Building Detection Using LiDAR-Orthophoto Fusion

    © 2018 Faten Hamed Nahhas et al. This paper reports on a building detection approach based on deep learning (DL) using the fusion of Light Detection and Ranging (LiDAR) data and orthophotos. The proposed method utilized object-based analysis to create objects, a feature-level fusion, an autoencoder-based dimensionality reduction to transform low-level features into compressed features, and a convolutional neural network (CNN) to transform compressed features into high-level features, which were used to classify objects into buildings and background. The proposed architecture was optimized using grid search, and its sensitivity to hyperparameters was analyzed and discussed. The proposed model was evaluated on two datasets selected from an urban area with different building types. Results show that dimensionality reduction by the autoencoder from 21 features to 10 features can improve detection accuracy from 86.06% to 86.19% in the working area and from 77.92% to 78.26% in the testing area. The sensitivity analysis also shows that the selection of the hyperparameter values of the model significantly affects detection accuracy. The best hyperparameters of the model are 128 filters in the CNN model, the Adamax optimizer, 10 units in the fully connected layer of the CNN model, a batch size of 8, and a dropout of 0.2. These hyperparameters are critical to improving the generalization capacity of the model. Furthermore, comparison experiments with the support vector machine (SVM) show that the proposed model with or without dimensionality reduction outperforms the SVM models in the working area. However, the SVM model achieves better accuracy in the testing area than the proposed model without dimensionality reduction. This study generally shows that the use of an autoencoder in DL models can improve the accuracy of building recognition in fused LiDAR-orthophoto data.
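    The autoencoder-based dimensionality reduction step can be illustrated with a minimal linear autoencoder trained by gradient descent on random stand-in features, compressing 21 features to 10 as in the reported setup. This is a toy sketch, not the paper's network (which feeds the compressed features to a CNN).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 21))             # stand-in for 21 object-level features

    # One-layer linear autoencoder: compress 21 features to 10, then reconstruct.
    W_enc = rng.normal(scale=0.1, size=(21, 10))
    W_dec = rng.normal(scale=0.1, size=(10, 21))
    loss0 = np.mean((X @ W_enc @ W_dec - X) ** 2)   # loss before training

    lr = 0.01
    for _ in range(500):
        Z = X @ W_enc                          # compressed 10-dim codes
        err = Z @ W_dec - X                    # reconstruction error
        # Gradient steps on the mean squared reconstruction error.
        grad_dec = Z.T @ err / len(X)
        grad_enc = X.T @ (err @ W_dec.T) / len(X)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc

    loss = np.mean((X @ W_enc @ W_dec - X) ** 2)
    print(Z.shape, loss0, loss)   # reconstruction error drops as training proceeds
    ```

    With nonlinear activations and a reconstruction bottleneck, the same training principle yields the compressed features that the paper passes on to the CNN classifier.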

    Plant Disease Detection in Image Processing Using MATLAB

    To increase the growth and productivity of crop fields, farmers need automatic monitoring of plant diseases instead of manual inspection. Manual monitoring does not give satisfactory results: naked-eye observation is an old method that requires more time for disease recognition and also needs an expert, so it is ineffective. In this paper, we introduce a modern technique to find diseases of both leaves and fruit. To overcome the disadvantages of traditional eye observation, we use digital image processing for fast and accurate disease detection in plants. In our proposed work, we developed a k-means clustering algorithm combined with a multi-SVM algorithm in MATLAB for disease identification and classification.
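    The k-means step used to isolate diseased regions can be sketched with plain Lloyd's algorithm on toy pixel colours (an illustration in Python, not the authors' MATLAB code): assign each pixel to its nearest cluster centre, then recompute the centres, and repeat.

    ```python
    import numpy as np

    def kmeans(X, k, iters=20, seed=0):
        """Plain Lloyd's algorithm: assign to nearest centre, then recentre."""
        rng = np.random.default_rng(seed)
        centres = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            d = np.linalg.norm(X[:, None] - centres[None], axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centres[j] = X[labels == j].mean(axis=0)
        return labels, centres

    # Toy "pixel colours": a dark healthy cluster and a bright lesion-like cluster.
    X = np.array([[10.0, 10.0, 10.0], [12.0, 11.0, 9.0],
                  [200.0, 180.0, 40.0], [210.0, 190.0, 50.0]])
    labels, centres = kmeans(X, k=2)
    print(labels)   # the two dark pixels share one label, the two bright ones the other
    ```

    In the full pipeline, the cluster containing the lesion-coloured pixels is selected and its texture features are passed to the SVM classifier.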

    An Effective Approach for Human Activity Classification Using Feature Fusion and Machine Learning Methods

    Recent advances in image processing and machine learning methods have greatly enhanced the ability of object classification from images and videos in different applications. Classification of human activities is one of the emerging research areas in the field of computer vision. It can be used in several applications including medical informatics, surveillance, human-computer interaction, and task monitoring. In the medical and healthcare field, the classification of patients' activities is important for providing the required information to doctors and physicians for medication reactions and diagnosis. Several approaches to recognizing human activity from videos and images have been proposed using machine learning (ML) and soft computing algorithms. However, advanced computer vision methods remain a promising direction for developing human activity classification from sequences of video frames. This paper proposes an effective automated approach using feature fusion and ML methods. It consists of five steps: preprocessing, feature extraction, feature selection, feature fusion, and classification. Two publicly available benchmark datasets are utilized to train, validate, and test the ML classifiers of the developed approach. The experimental results of this research work show that the accuracies achieved are 99.5% and 99.9% on the first and second datasets, respectively. Compared with many existing related approaches, the proposed approach attained high performance in terms of the sensitivity, accuracy, precision, and specificity evaluation metrics.
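    A simple form of the feature fusion step described above is serial fusion: normalize each feature block so no block dominates by scale, then concatenate. The block names and sizes below are illustrative assumptions, not the paper's actual feature sets.

    ```python
    import numpy as np

    def fuse(shape_feats, motion_feats):
        """Serial feature fusion: z-score each block, then concatenate."""
        def z(f):
            return (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-9)
        return np.hstack([z(shape_feats), z(motion_feats)])

    rng = np.random.default_rng(1)
    shape = rng.normal(size=(6, 4))     # e.g. 4 shape features per sample
    motion = rng.normal(size=(6, 3))    # e.g. 3 motion features per sample
    fused = fuse(shape, motion)
    print(fused.shape)   # (6, 7): each sample now carries both feature blocks
    ```

    The fused vectors then feed the classification step, with feature selection applied before or after fusion to drop uninformative dimensions.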