Classification of interstitial lung disease patterns with topological texture features
Topological texture features were compared in their ability to classify
morphological patterns known as 'honeycombing', which are considered indicative
of fibrotic interstitial lung diseases in high-resolution
computed tomography (HRCT) images. For 14 patients with known occurrence of
honeycombing, a stack of 70 axial, lung-kernel-reconstructed images was
acquired from HRCT chest exams. A set of 241 regions of interest containing
both healthy and pathological (89) lung tissue was identified by an experienced
radiologist.
radiologist. Texture features were extracted using six properties calculated
from gray-level co-occurrence matrices (GLCM), Minkowski Dimensions (MDs), and
three Minkowski Functionals (MFs, e.g. MF.euler). A k-nearest-neighbor (k-NN)
classifier and a Multilayer Radial Basis Functions Network (RBFN) were
optimized in a 10-fold cross-validation for each texture vector, and the
classification accuracy was calculated on independent test sets as a
quantitative measure of automated tissue characterization. A Wilcoxon
signed-rank test was used to compare two accuracy distributions and the
significance thresholds were adjusted for multiple comparisons by the
Bonferroni correction. The best classification results were obtained by the MF
features, which performed significantly better than all the standard GLCM and
MD features (p < 0.005) for both classifiers. The highest accuracy was found
for MF.euler (97.5%, 96.6%; for the k-NN and RBFN classifier, respectively).
The best standard texture features were the GLCM features 'homogeneity' (91.8%,
87.2%) and 'absolute value' (90.2%, 88.5%). The results indicate that advanced
topological texture features can provide superior classification performance in
computer-assisted diagnosis of interstitial lung diseases when compared to
standard texture analysis methods.
Comment: 8 pages, 5 figures, Proceedings SPIE Medical Imaging 201
Automatic detection of small bowel tumors in capsule endoscopy based on color curvelet covariance statistical texture descriptors
Traditional endoscopic methods do not allow visualization of the entire gastrointestinal (GI) tract. Wireless Capsule Endoscopy (CE) is a diagnostic procedure that overcomes this limitation. CE video frames carry rich information about the condition of the stomach and intestinal mucosa, encoded as color and texture patterns. It has long been known that human perception of texture is based on a multi-scale analysis of patterns, which can be modeled by multi-resolution approaches. Furthermore, modeling the covariance of textural descriptors has been used successfully in the classification of colonoscopy videos. This paper therefore proposes a frame classification scheme based on statistical texture descriptors taken from the Discrete Curvelet Transform (DCT) domain, a recent multi-resolution mathematical tool. The DCT is built on an anisotropic notion of scale and offers high directional sensitivity in multiple directions, making it well suited to characterizing complex patterns such as texture. The covariance of texture descriptors taken at a given detail level, over different angles, is used as the classification feature, in a scheme designated Color Curvelet Covariance. The classification step is performed by a multilayer perceptron neural network. The proposed method has been applied to real data from several capsule endoscopy exams and reaches 97.2% sensitivity and 97.4% specificity. These promising results support the feasibility of the proposed method.
Spatial image polynomial decomposition with application to video classification
This paper addresses the use of an orthogonal polynomial basis transform in video classification, motivated by its advantages for multiscale and multiresolution analysis, similar to the wavelet transform. Our approach exploits these advantages in three ways. First, we reduce the resolution of the video using a multiscale/multiresolution decomposition. Second, we define a new algorithm that decomposes a color image into geometry and texture components by projecting the image onto a bivariate polynomial basis, taking the geometry component as the partial reconstruction and the texture component as the remaining part. Finally, we model the features (such as motion and texture) extracted from the reduced image sequences by projecting them onto a bivariate polynomial basis, constructing a hybrid polynomial motion-texture video descriptor. To evaluate the approach, we consider two visual recognition tasks: classification of dynamic textures and recognition of human actions. The experiments show that the proposed approach achieves a perfect recognition rate on the Weizmann database and the highest accuracy on the Dyntex++ database compared to existing methods.
Analysis of GLCM Parameters for Textures Classification on UMD Database Images
Texture analysis is one of the most important techniques in image processing, used for many purposes including image classification. Texture characterizes a region of a given gray-level image and reflects its relevant information. Several methods of analysis have been invented and developed to deal with texture in recent years, each with its own way of extracting features from the texture. These methods can be divided into two main approaches: statistical methods and processing methods. The Gray Level Co-occurrence Matrix (GLCM), a popular method for three decades now, is the most widely used statistical method for extracting features from texture. In addition to the GLCM itself, a number of Haralick feature equations are used in this study to calculate the values that discriminate among images belonging to different classes. Many GLCM parameters should be taken into consideration to increase this discrimination, and our aim in this study is to evaluate those parameters. A neural network, a supervised method, is used as the classifier, and the database for the study consists of images from the UMD (University of Maryland) database.
Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition and Remote Sensing Scene Classification
Designing discriminative powerful texture features robust to realistic
imaging conditions is a challenging computer vision problem with many
applications, including material recognition and analysis of satellite or
aerial imagery. In the past, most texture description approaches were based on
dense orderless statistical distribution of local features. However, most
recent approaches to texture recognition and remote sensing scene
classification are based on Convolutional Neural Networks (CNNs). The de facto
practice when learning these CNN models is to use RGB patches as input with
training performed on large amounts of labeled data (ImageNet). In this paper,
we show that Binary Patterns encoded CNN models, codenamed TEX-Nets, trained
using mapped coded images with explicit texture information provide
complementary information to the standard RGB deep models. Additionally, two
deep architectures, namely early and late fusion, are investigated to combine
the texture and color information. To the best of our knowledge, we are the
first to investigate Binary Patterns encoded CNNs and different deep network
fusion architectures for texture recognition and remote sensing scene
classification. We perform comprehensive experiments on four texture
recognition datasets and four remote sensing scene classification benchmarks:
UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with
7 categories and the recently introduced large scale aerial image dataset (AID)
with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary
information to the standard RGB deep model of the same network architecture. Our
late fusion TEX-Net architecture always improves the overall performance
compared to the standard RGB network on both recognition problems. Our final
combination outperforms the state-of-the-art without employing fine-tuning or
ensemble of RGB network architectures.
Comment: To appear in ISPRS Journal of Photogrammetry and Remote Sensing
Unsupervised Classification of Intrusive Igneous Rock Thin Section Images using Edge Detection and Colour Analysis
Classification of rocks is one of the fundamental tasks in a geological
study. The process requires a human expert to examine sampled thin section
images under a microscope. In this study, we propose a method that uses
microscope automation, digital image acquisition, edge detection and colour
analysis (histogram). We collected 60 digital images from 20 standard thin
sections using a digital camera mounted on a conventional microscope. Each
image is partitioned into a finite number of cells that form a grid structure.
Edge and colour profile of pixels inside each cell determine its
classification. The individual cells then determine the thin section image
classification via a majority voting scheme. Our method achieved precision
ranging from 90% to 100%.
Comment: To appear in 2017 IEEE International Conference on Signal and Image
Processing Applications
A robust adaptive wavelet-based method for classification of meningioma histology images
Intra-class variability in the texture of samples is an important problem in the domain of histological image classification. The issue is inherent to the field due to the high complexity of histology image data: a technique that provides good results in one trial may fail in another when the test and training data are changed, so the technique needs to adapt to intra-class texture variation. In this paper, we present a novel wavelet-based multiresolution analysis approach to meningioma subtype classification in response to this challenge of data variation. We analyze the stability of the Adaptive Discriminant Wavelet Packet Transform (ADWPT) and present a solution to the issue of variation in the ADWPT decomposition when the texture in the data changes. A feature selection approach is proposed that provides high classification accuracy.
A Self-Organizing Neural System for Learning to Recognize Textured Scenes
A self-organizing ARTEX model is developed to categorize and classify textured image regions. ARTEX specializes the FACADE model of how the visual cortex sees and the ART model of how temporal and prefrontal cortices interact with the hippocampal system to learn visual recognition categories and their names. FACADE processing generates a vector of boundary and surface properties, notably texture and brightness properties, by utilizing multi-scale filtering, competition, and diffusive filling-in. Its context-sensitive local measures of textured scenes can be used to recognize scenic properties that gradually change across space, as well as abrupt texture boundaries. ART incrementally learns recognition categories that classify FACADE output vectors, the class names of these categories, and their probabilities. Top-down expectations within ART encode learned prototypes that pay attention to expected visual features. When novel visual information creates a poor match with the best existing category prototype, a memory search selects a new category with which to classify the novel data. ARTEX is compared with psychophysical data and is benchmarked on classification of natural textures and synthetic aperture radar images. It outperforms state-of-the-art systems that use rule-based, backpropagation, and K-nearest neighbor classifiers.
Defense Advanced Research Projects Agency; Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)