
    A Decision Support System For The Intelligence Satellite Analyst

    The study developed a decision support system known as the Visual Analytic Cognitive Model (VACOM) to support the Intelligence Analyst (IA) in satellite information processing tasks within the Geospatial Intelligence (GEOINT) domain. As a visual analytics tool, VACOM contains image processing algorithms, a cognitive network of the IA's mental model, and a Bayesian belief model for satellite information processing. A cognitive analysis tool helped identify eight knowledge levels in satellite information processing: spatial, prototypical, contextual, temporal, semantic, pragmatic, intentional, and inferential. A cognitive network was developed for each knowledge level, with data input from subjective questionnaires that probed the analysts' mental models. The VACOM interface was designed to give analysts a transparent view of the underlying processes, including the visualization and signal processing models applied to the images, the geospatial data representation, and the cognitive network of expert beliefs. It allows the user to select a satellite image of interest, apply each of the image analysis methods for visualization, and compare 'ground-truth' information against VACOM's recommendations. The interface was also designed to enhance the analysts' perception, cognition, and comprehension in the face of multiple, complex image analyses. A usability analysis of VACOM showed several advantages for the human analyst, including reduced cognitive workload thanks to less information search, the ability to experiment interactively with each belief and guess, and guidance in selecting the best image processing algorithms to apply in a given image context.
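    The Bayesian belief model inside VACOM is not spelled out above, but the core operation it implies is a posterior update of analyst beliefs given image evidence. The following is a minimal sketch of such a discrete update, assuming entirely hypothetical hypothesis names, priors, and likelihoods (none of which come from the study).

    # Minimal sketch of a discrete Bayesian belief update, as a belief model like
    # VACOM's might combine an analyst's prior with image-derived evidence.
    # All class names, priors, and likelihoods are hypothetical illustrations.
    import numpy as np

    classes = ["airfield", "port", "industrial"]      # hypothetical hypotheses
    prior = np.array([0.5, 0.3, 0.2])                 # analyst's prior beliefs
    likelihood = np.array([0.8, 0.1, 0.2])            # P(evidence | class) for one cue

    posterior = likelihood * prior
    posterior /= posterior.sum()                      # normalize to a probability vector

    for c, p in zip(classes, posterior):
        print(f"P({c} | evidence) = {p:.3f}")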

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in the realms of image understanding/analysis, computer vision, pattern recognition, remote sensing and medical imaging has been significantly augmented in recent years by accelerated scientific advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for meaningfully segregating 2-D/3-D image data across multiple modalities (color, remote-sensing and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, the framework exploits the information obtained from detecting edges inherent in the data: using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition an initial portion of the input image content. Pixels with higher gradient densities are then included through the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture and intensity information, together with the aforementioned initial partition map, drive a multivariate refinement procedure that fuses groups with similar characteristics to produce the final output segmentation. Experimental results obtained in comparison to published/state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery demonstrate the advantages of the proposed method. Furthermore, to improve computational efficiency, we propose an extension of this methodology in a multi-resolution framework, demonstrated on color images. Finally, this research also encompasses a 3-D extension of the algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
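    The first stage described above (vector gradient detection followed by labeling of edge-free pixels into an initial region map) can be illustrated roughly as below, assuming scikit-image; the per-channel Sobel gradient, the 10th-percentile threshold, and the sample image are placeholders, not the authors' actual operators or parameters.

    # Rough sketch of the first stage: a color (vector-style) gradient, a mask of
    # low-gradient ("edge-free") pixels, and connected-component labeling of that
    # mask as the initial partition. Threshold and libraries are assumptions.
    import numpy as np
    from skimage import data, filters, measure

    rgb = data.astronaut().astype(float)                     # example color image
    grad = np.zeros(rgb.shape[:2])
    for ch in range(3):                                      # combine per-channel gradients
        grad = np.maximum(grad, filters.sobel(rgb[..., ch]))

    low_grad = grad < np.percentile(grad, 10)                # pixels without strong edges
    initial_regions = measure.label(low_grad)                # initial region map
    print("initial segments:", initial_regions.max())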

    A FLEXIBLE SUB-BLOCK IN REGION BASED IMAGE RETRIEVAL BASED ON TRANSITION REGION

    One technique in region-based image retrieval (RBIR) is to compare the global feature of the entire image and the local features of image sub-blocks between the query and database images. The sub-block must be able to capture objects of varying sizes and locations, so a sub-block with flexible size and location is needed. We propose a new method for local feature extraction that determines the flexible size and location of the sub-block based on the transition region. Global features of both the query and database images are extracted using invariant moments. Local features of the database and query images are extracted using a hue-saturation-value (HSV) histogram and local binary patterns (LBP). Extracting the local feature of a sub-block in the query image takes several steps: first, preprocessing is conducted to obtain the transition region; then the flexible sub-block is determined from the transition region; finally, the local feature of the sub-block is extracted. The result is a set of retrieved images ordered by similarity to the query image. Local feature extraction with the proposed method is effective for image retrieval, with precision and recall values of 57%.
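    As an illustration of the local features mentioned above, the sketch below computes an HSV histogram and a uniform LBP histogram for a single sub-block using scikit-image; the sub-block coordinates, bin counts, and sample image are hypothetical choices, not the paper's settings.

    # Illustrative local feature for one sub-block: an HSV histogram concatenated
    # with a local binary pattern (LBP) histogram. Coordinates and bin counts are
    # assumptions for demonstration only.
    import numpy as np
    from skimage import data, color
    from skimage.feature import local_binary_pattern

    img = data.astronaut()
    block = img[100:200, 150:250]                            # hypothetical sub-block

    hsv = color.rgb2hsv(block)
    hsv_hist, _ = np.histogramdd(hsv.reshape(-1, 3), bins=(8, 4, 4), range=((0, 1),) * 3)

    lbp = local_binary_pattern(color.rgb2gray(block), P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10))

    feature = np.concatenate([hsv_hist.ravel(), lbp_hist]).astype(float)
    feature /= feature.sum()                                 # normalized local feature vector
    print("feature length:", feature.size)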

    Information theoretic thresholding techniques based on particle swarm optimization.

    In this dissertation, we discuss multi-level image thresholding techniques based on information-theoretic entropies. In order to exploit the correlation between neighboring pixels of an image and obtain better segmentation results, we propose several multi-level thresholding models using the Gray-Level & Local-Average (GLLA) histogram and the Gray-Level & Local-Variance (GLLV) histogram. Firstly, an RGB color image thresholding model based on the GLLA histogram and the Tsallis-Havrda-Charvát entropy is discussed. We validate the multi-level thresholding criterion function using mathematical induction. For each component image, we assign the mean value of each thresholded class to obtain three segmented component images independently, and then combine them to obtain the segmented color image. Secondly, we use the GLLV histogram to propose three novel entropic multi-level thresholding models based on Shannon entropy, Rényi entropy and Tsallis-Havrda-Charvát entropy, respectively, and apply these models to the three components of an RGB color image to complete the segmentation. An entropic thresholding model essentially searches for the optimal threshold values by maximizing or minimizing a criterion function; we apply the particle swarm optimization (PSO) algorithm to search for the optimal threshold values in all of the models. We conduct extensive experiments on The Berkeley Segmentation Dataset and Benchmark (BSDS300) and report the averages of four performance indices (Probabilistic Rand Index, PRI; Global Consistency Error, GCE; Variation of Information, VOI; and Boundary Displacement Error, BDE) to show the effectiveness and soundness of the proposed models.
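    A much-simplified sketch of the idea follows: a Shannon-entropy criterion over a 1-D gray-level histogram (the dissertation works with 2-D GLLA/GLLV histograms and several entropies) whose thresholds are found with a bare-bones PSO. The histogram is synthetic and every PSO hyperparameter is an illustrative assumption.

    # Simplified sketch of entropic multi-level thresholding: maximize the sum of
    # class Shannon entropies over a 1-D histogram using a minimal PSO loop.
    import numpy as np

    def shannon_criterion(hist, thresholds):
        """Sum of Shannon entropies of the classes induced by the thresholds."""
        p = hist / hist.sum()
        bounds = [0] + sorted(int(t) for t in thresholds) + [256]
        total = 0.0
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            w = p[lo:hi].sum()
            if w > 0:
                q = p[lo:hi] / w
                total += -(q[q > 0] * np.log(q[q > 0])).sum()
        return total

    def pso_maximize(f, dim, n_particles=30, iters=100, lo=1, hi=255):
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, (n_particles, dim))       # particle positions
        v = np.zeros_like(x)                              # velocities
        pbest, pbest_val = x.copy(), np.array([f(xi) for xi in x])
        gbest = pbest[pbest_val.argmax()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            vals = np.array([f(xi) for xi in x])
            improved = vals > pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[pbest_val.argmax()].copy()
        return np.sort(gbest)

    hist = np.random.default_rng(1).integers(1, 100, 256).astype(float)  # synthetic histogram
    print("thresholds:", pso_maximize(lambda t: shannon_criterion(hist, t), dim=2))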

    A deep learning approach to bone segmentation in CT scans

    This thesis proposes a deep learning approach to bone segmentation in abdominal CT scans. Segmentation is a common initial step in medical image analysis, often fundamental for computer-aided detection and diagnosis systems. The extraction of bones from CT scans is a challenging task: done manually by experts it is time consuming, and no broadly recognized automatic solution exists today. The method presented is based on a convolutional neural network, inspired by the U-Net and trained end-to-end, that performs a semantic segmentation of the data. The training dataset is made up of 21 abdominal CT scans, each one containing between 403 and 994 2D transversal images. The images are in full resolution, 512x512 voxels, and each voxel is classified by the network into one of the following classes: background, femoral bones, hips, sacrum, sternum, spine and ribs. The output is therefore a bone mask in which the bones are recognized and divided into six different classes. On the testing dataset, labeled by experts, the best model achieves an average Dice coefficient of 0.93 across all bone classes. This work demonstrates, to the best of my knowledge for the first time, the feasibility of automatic bone segmentation and classification for CT scans using a convolutional neural network.
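    The reported metric can be sketched directly: a per-class Dice coefficient averaged over the bone classes, computed on integer label masks. In the snippet below the class IDs (1 through 6 for the six bone classes, 0 for background) and the random toy volumes are assumptions for illustration.

    # Minimal sketch of the evaluation metric: per-class Dice coefficient averaged
    # over the bone classes (background excluded), computed on integer label masks.
    import numpy as np

    def mean_bone_dice(pred, truth, bone_classes=range(1, 7), eps=1e-7):
        dices = []
        for c in bone_classes:
            p, t = (pred == c), (truth == c)
            dices.append((2.0 * np.logical_and(p, t).sum() + eps) /
                         (p.sum() + t.sum() + eps))
        return float(np.mean(dices))

    # Toy example with random label volumes (real masks come from the CT scans).
    rng = np.random.default_rng(0)
    pred = rng.integers(0, 7, (4, 64, 64))
    truth = rng.integers(0, 7, (4, 64, 64))
    print("mean Dice:", mean_bone_dice(pred, truth))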

    Deep ensemble model-based moving object detection and classification using SAR images

    In recent decades, image processing and computer vision models have played a vital role in moving object detection on synthetic aperture radar (SAR) images. Capturing moving objects in SAR images is a difficult task. In this study, a new automated model for detecting moving objects in SAR images is proposed. The proposed model has four main steps: preprocessing, segmentation, feature extraction, and classification. Initially, the input SAR image is pre-processed using a histogram equalization technique. Then, a weighted Otsu-based segmentation algorithm is applied to segment the object regions from the pre-processed images; with weighted Otsu, the segmented grayscale images are not only clear but also retain the detailed features of the grayscale images. Next, feature extraction is carried out using the gray-level co-occurrence matrix (GLCM), median binary patterns (MBPs), and the additive harmonic mean estimated local Gabor binary pattern (AHME-LGBP). The final step is classification with a deep ensemble model, which combines a bidirectional long short-term memory (Bi-LSTM) network, a recurrent neural network (RNN), and an improved deep belief network (IDBN), each trained on the features extracted previously. The combined models significantly increase the accuracy of the results; ensembling also reduces variance and modeling-method bias, which decreases the chance of overfitting, and narrows the spread of model performance and prediction accuracy compared to any single contributing model. Finally, the performance of the proposed model is compared with conventional models with respect to different measures. In the mean-case scenario, the proposed ensemble model has a minimum error value of 0.032, better than the other models; in the median- and best-case scenarios, it has lower error values of 0.029 and 0.015, respectively.
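    The early stages of the pipeline can be sketched roughly as below with scikit-image: histogram equalization, thresholding (plain Otsu standing in for the paper's weighted Otsu variant), and GLCM texture features. The synthetic input array, the gray-level quantization, and the GLCM parameters are placeholder assumptions.

    # Illustrative sketch of preprocessing, segmentation, and GLCM features.
    # Plain Otsu is used here in place of the paper's weighted Otsu variant.
    import numpy as np
    from skimage import exposure, filters
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(0)
    sar = rng.integers(0, 256, (128, 128)).astype(np.uint8)   # placeholder SAR image

    equalized = exposure.equalize_hist(sar)                    # histogram equalization
    mask = equalized > filters.threshold_otsu(equalized)       # object-vs-background mask

    quantized = (equalized * 15).astype(np.uint8)              # 16 gray levels for the GLCM
    glcm = graycomatrix(quantized, distances=[1], angles=[0],
                        levels=16, symmetric=True, normed=True)
    features = {prop: float(graycoprops(glcm, prop)[0, 0])
                for prop in ("contrast", "homogeneity", "energy", "correlation")}
    print(features)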