Analysis of GLCM Parameters for Textures Classification on UMD Database Images
Texture analysis is one of the most important techniques used in image processing for many purposes, including image classification. Texture characterizes the regions of a gray-level image and reflects their relevant information. Several methods of texture analysis have been invented and developed in recent years, each with its own way of extracting features from the texture. These methods can be divided into two main approaches: statistical methods and processing methods. The Gray Level Co-occurrence Matrix (GLCM), a popular texture-analysis method for three decades now, is the most widely used statistical method for extracting features from texture. In addition to the GLCM itself, a number of Haralick feature equations are used in this study to compute values that serve as discriminative features among different images. Many GLCM parameters should be taken into consideration to increase the discrimination between images belonging to different classes, and this study aims to evaluate those parameters. A neural network, a supervised learning method, is used as the classifier, and the images for the study are prepared from the UMD (University of Maryland) database.
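The GLCM-plus-Haralick pipeline the abstract describes can be sketched in a few lines. This is a minimal illustration, not the study's actual parameters: the offset, gray-level count, and sample image are assumptions, and only one Haralick feature (contrast) is shown.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Normalized co-occurrence counts of gray-level pairs at offset (dx, dy)."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_contrast(p):
    """Haralick contrast: sum over all (i, j) of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return np.sum((i - j) ** 2 * p)

# Toy 4-level image; a real study would quantize each texture patch first.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img)
print(haralick_contrast(p))
```

In practice each choice of offset (dx, dy) and quantization level yields a different matrix, which is exactly why the GLCM parameters evaluated in the study matter for discrimination.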
Generalization of form in visual pattern classification.
Human observers were trained to criterion in classifying compound Gabor signals with symmetry relationships, and were then tested with each of 18 blob-only versions of the learning set. Generalization to dark-only and light-only blob versions of the learning signals, as well as to dark-and-light blob versions, was found to be excellent, thus implying virtually perfect generalization of the ability to classify mirror-image signals. The hypothesis that the learning signals are internally represented in terms of a 'blob code' with explicit labelling of contrast polarities was tested by predicting observed generalization behaviour in terms of various types of signal representations (pixelwise, Laplacian pyramid, curvature pyramid, ON/OFF, local maxima of Laplacian and curvature operators) and a minimum-distance rule. Most representations could explain generalization for dark-only and light-only blob patterns but not for the high-thresholded versions thereof. This led to the proposal of a structure-oriented blob-code. Whether such a code could be used in conjunction with simple classifiers or should be transformed into a propositional scheme of representation operated upon by a rule-based classification process remains an open question.
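The minimum-distance rule mentioned above assigns a test pattern to the class whose stored representation is nearest. A toy sketch under an assumed pixelwise representation (the vectors and class labels are illustrative, not the study's actual signals):

```python
import numpy as np

def min_distance_classify(test_vec, class_prototypes):
    """Assign test_vec to the class whose prototype is nearest (Euclidean)."""
    dists = {label: np.linalg.norm(test_vec - proto)
             for label, proto in class_prototypes.items()}
    return min(dists, key=dists.get)

# Hypothetical "learning signals" encoded as pixelwise vectors.
prototypes = {
    "mirror_left":  np.array([1.0, 0.0, 0.0, 1.0]),
    "mirror_right": np.array([0.0, 1.0, 1.0, 0.0]),
}
# A degraded probe (e.g. a blob-only version of the left-class signal).
probe = np.array([0.9, 0.1, 0.0, 1.0])
print(min_distance_classify(probe, prototypes))
```

Swapping the pixelwise vectors for Laplacian-pyramid or ON/OFF encodings changes only the representation step; the same distance rule is applied to each candidate code when predicting generalization behaviour.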
Improved texture image classification through the use of a corrosion-inspired cellular automaton
In this paper, the problem of classifying synthetic and natural texture images is addressed. To tackle this problem, an innovative method is proposed that combines concepts from corrosion modeling and cellular automata to generate a texture descriptor. The core processes of metal (pitting) corrosion are identified and applied to texture images by incorporating the basic mechanisms of corrosion into the transition function of the cellular automaton. The surface morphology of the image is analyzed before and during the application of the transition function, and in each iteration the cumulative mass of corroded product is obtained to construct one attribute of the texture descriptor. In a final step, this texture descriptor is used for image classification by applying Linear Discriminant Analysis. The method was tested on the well-known Brodatz and Vistex databases. In addition, to verify the robustness of the method, its invariance to noise and rotation was tested; to that end, different variants of the original two databases were obtained by adding noise to and rotating the images. The results showed that the method is effective for texture classification, as evidenced by the high success rates obtained in all cases. This indicates the potential of employing methods inspired by natural phenomena in other fields.
Comment: 13 pages, 14 figures
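The descriptor construction described above can be sketched schematically: a cellular automaton whose transition function corrodes pit-prone pixels, with the cumulative corroded mass after each iteration forming one attribute of the feature vector. The threshold rule, corrosion rate, and periodic boundary here are illustrative assumptions, not the paper's exact transition function:

```python
import numpy as np

def corrosion_descriptor(image, n_iter=5, rate=0.2, threshold=0.5):
    """Iterate a corrosion-like CA; return cumulative corroded mass per step."""
    surface = image.astype(float).copy()
    corroded_total = 0.0
    descriptor = []
    for _ in range(n_iter):
        # 4-neighbour mean of the surface (periodic boundary for simplicity).
        neigh = (np.roll(surface, 1, 0) + np.roll(surface, -1, 0) +
                 np.roll(surface, 1, 1) + np.roll(surface, -1, 1)) / 4.0
        # Pit-prone cells: well below their local neighbourhood level.
        pits = surface < threshold * neigh
        loss = np.where(pits, rate * surface, 0.0)  # mass removed this step
        surface -= loss
        corroded_total += loss.sum()
        descriptor.append(corroded_total)  # one attribute per iteration
    return np.array(descriptor)

rng = np.random.default_rng(0)
texture = rng.random((16, 16))  # stand-in for a Brodatz/Vistex patch
feats = corrosion_descriptor(texture)
```

The resulting vectors (one per image) would then be fed to a classifier such as the Linear Discriminant Analysis used in the paper.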
Changes in navigational behaviour produced by a wide field of view and a high fidelity visual scene
The difficulties people frequently have navigating in virtual environments (VEs) are well known. Usually these difficulties are quantified in terms of performance (e.g., time taken or number of errors made in following a path), with these data used to compare navigation in VEs to equivalent real-world settings. However, an important cause of any performance differences is changes in people's navigational behaviour. This paper reports a study that investigated the effect of visual scene fidelity and field of view (FOV) on participants' behaviour in a navigational search task, to help identify the thresholds of fidelity that are required for efficient VE navigation. With a wide FOV (144 degrees), participants spent a significantly larger proportion of their time travelling through the VE, whereas participants who used a normal FOV (48 degrees) spent significantly longer standing in one place planning where to travel. Also, participants who used a wide FOV and a high-fidelity scene came significantly closer to conducting the search "perfectly" (visiting each place once). In an earlier real-world study, participants completed 93% of their searches perfectly and planned where to travel while they moved. Thus, navigating a high-fidelity VE with a wide FOV increased the similarity between VE and real-world navigational behaviour, which has important implications for both VE design and understanding human navigation.
Detailed analysis of the errors that participants made during their non-perfect searches highlighted a dramatic difference between the two FOVs. With a narrow FOV, participants often travelled right past a target without it appearing on the display, whereas with the wide FOV, targets displayed towards the sides of participants' overall FOV were often not searched, indicating a problem with the demands that such a wide FOV display makes on human visual attention.