3 research outputs found

    PCA BASED CLASSIFICATION OF SINGLE-LAYERED CLOUD TYPES

    The paper presents an automatic classification system that discriminates between different types of single-layered clouds using Principal Component Analysis (PCA), with enhanced accuracy compared to other techniques. PCA is an image classification technique typically used for face recognition; it identifies image features called principal components, each of which captures a distinctive characteristic of an image. The approach described in this paper uses this capability of PCA to enhance the accuracy of cloud image analysis. To demonstrate the enhancement, a software classifier system has been developed that incorporates PCA for better discrimination of cloud images. The system is first trained on cloud images: in the training phase, it extracts the major principal features of the different cloud images to produce an image space. In the testing phase, a new cloud image is classified by comparing it against this image space using the PCA algorithm.
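
    A minimal sketch, in Python, of the eigenspace-style training and classification described above. The dataset, image size, class labels, and the nearest-neighbour matching rule are illustrative assumptions, not details taken from the paper.

        # Hedged sketch: build a PCA image space from flattened grayscale images,
        # then classify a new image by nearest projection (eigenface-style).
        import numpy as np

        def train_pca(images, n_components):
            """Build the PCA image space from training images (rows = flattened images)."""
            X = np.asarray(images, dtype=np.float64)
            mean = X.mean(axis=0)
            Xc = X - mean
            # SVD of the centred data yields the principal components (eigen-images).
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            components = Vt[:n_components]      # principal axes
            projections = Xc @ components.T     # training images expressed in the image space
            return mean, components, projections

        def classify(image, mean, components, projections, labels):
            """Project a new image into the image space and return the nearest training label."""
            w = (np.asarray(image, dtype=np.float64) - mean) @ components.T
            return labels[int(np.argmin(np.linalg.norm(projections - w, axis=1)))]

        # Toy usage with random vectors standing in for flattened cloud images
        # (the cloud-type labels here are assumed, not from the paper).
        rng = np.random.default_rng(0)
        train = rng.random((20, 64 * 64))
        labels = ["cumulus", "stratus", "cirrus", "stratocumulus"] * 5
        mean, comps, projs = train_pca(train, n_components=10)
        print(classify(rng.random(64 * 64), mean, comps, projs, labels))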

    Satellite Image Classification Using Moment and SVD Method

    The motivation addressed in this paper is to classify satellite images using the moment and singular value decomposition (SVD) methods; both proposed methods consist of two phases, enrollment and classification. The enrollment phase aims to extract the image classes to be stored in a dataset as training data. Since the SVD method is supervised, it cannot build the intended dataset itself; instead, moment-based K-means was used to construct it. The enrollment phase therefore begins by partitioning the image into uniformly sized blocks and estimating the moment of each block. The moment is the feature by which the image blocks are grouped: K-means is used to cluster the blocks and to determine the number of clusters and the centroid of each cluster. The image blocks corresponding to these centroids are stored in the dataset to be used in the classification phase. The results of the enrollment phase showed that the image contains five distinct classes: water, vegetation, residential without vegetation, residential with vegetation, and open land. The classification phase consists of multiple stages: image composition, image transform, image partitioning, feature extraction, and finally image classification. The SVD classification method uses the dataset to estimate the SVD classification feature and computes a similarity measure for each block in the image, while the moment classification method uses the dataset to compute the mean of each column and computes a similarity measure for each pixel in the image. The results of the two classification paths were assessed by comparing them, pixel by pixel over the whole image, with a reference classified image produced by the Iraqi Geological Surveying Corporation (GSC), and computing several evaluation measurements. The classification methods performed well and produced acceptable classification scores: the SVD method scored about 70.64%, rising to 81.833% when the residential-without-vegetation and residential-with-vegetation classes are treated as one, whereas the moment method scored about 95.84%. These encouraging results indicate the ability of the proposed methods to classify multiband satellite images efficiently.
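
    A minimal sketch, in Python, of the enrollment phase described above: partition the image into uniform blocks, compute one moment per block, cluster the moments with K-means, and keep the block nearest each centroid as the dataset. The block size, the use of the block mean as the moment, and scikit-learn's KMeans are assumptions made for illustration rather than the paper's exact choices.

        import numpy as np
        from sklearn.cluster import KMeans

        def enroll(image, block_size=8, n_classes=5):
            """Partition a single band into uniform blocks, compute one moment per block,
            cluster the moments with K-means, and keep the block nearest each centroid."""
            h, w = image.shape
            h -= h % block_size
            w -= w % block_size
            blocks = (image[:h, :w]
                      .reshape(h // block_size, block_size, w // block_size, block_size)
                      .swapaxes(1, 2)
                      .reshape(-1, block_size * block_size))
            moments = blocks.mean(axis=1, keepdims=True)   # first-order moment of each block
            km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(moments)
            # For each cluster, store the block whose moment is closest to the centroid.
            dataset = [blocks[np.argmin(np.abs(moments[:, 0] - c))]
                       for c in km.cluster_centers_[:, 0]]
            return np.array(dataset), km

        # Toy usage with a random array standing in for one satellite band.
        dataset, km = enroll(np.random.default_rng(1).random((128, 128)))
        print(dataset.shape)   # (5, 64): one representative block per class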

    A New Approach to Automatic Saliency Identification in Images Based on Irregularity of Regions

    This research introduces an image retrieval system that is, in several ways, inspired by the human vision system. The main problems with existing machine vision systems and image understanding are studied and identified in order to design a system that relies on human image understanding. The main improvement of the developed system is that it applies the principles of human attention to the process of identifying image contents. Human attention is represented by saliency extraction algorithms, which extract the salient regions, in other words the regions of interest. This work presents a new approach to saliency identification that relies on the irregularity of a region. Irregularity is clearly defined and measuring tools are developed; these measures are derived from the formality and variation of a region with respect to its surrounding regions. Both local and global saliency have been studied, and appropriate algorithms were developed based on the local and global irregularity defined in this work. The need for suitable automatic clustering techniques motivated a study of the available clustering techniques and the development of a technique suited to clustering salient points: based on the fact that humans usually look at the region surrounding the gaze point, an agglomerative clustering technique is developed utilising the principles of blob extraction and intersection. Automatic thresholding was needed at different stages of the system's development, so a fuzzy thresholding technique was developed. Evaluation methods for salient region extraction have been studied and analysed; subsequently, evaluation techniques were developed based on the extracted regions (or points) and compared with the ground truth data. The proposed algorithms were tested against standard datasets and compared with existing state-of-the-art algorithms. Both quantitative and qualitative benchmarking are presented in this thesis, together with a detailed discussion of the results; the benchmarking showed promising results for the different algorithms. The developed algorithms have been utilised in designing an integrated saliency-based image retrieval system that uses the salient regions to describe the scene. The system auto-labels the objects in the image by identifying the salient objects and assigning labels based on the contents of the knowledge database. In addition, the system identifies the unimportant part of the image (the background) to give a full description of the scene.
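
    A minimal sketch, in Python, of a local-irregularity style saliency map, assuming irregularity can be proxied by how much the statistics of a small window deviate from those of its larger surrounding window. The window sizes and the variance-contrast proxy are illustrative assumptions, not the irregularity measures defined in the thesis.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_variance(image, size):
            """Variance inside a square window of the given size, at every pixel."""
            mean = uniform_filter(image, size)
            mean_sq = uniform_filter(image * image, size)
            return np.clip(mean_sq - mean * mean, 0, None)

        def irregularity_saliency(image, inner=5, outer=21):
            """Saliency as the contrast between inner- and outer-window variance, scaled to [0, 1]."""
            sal = np.abs(local_variance(image, inner) - local_variance(image, outer))
            return (sal - sal.min()) / (np.ptp(sal) + 1e-12)

        # Toy usage: a flat image with one textured patch should peak inside the patch.
        rng = np.random.default_rng(2)
        img = np.zeros((100, 100))
        img[40:60, 40:60] = rng.random((20, 20))
        print(np.unravel_index(irregularity_saliency(img).argmax(), img.shape))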