Pattern classification using a linear associative memory
Pattern classification is a very important image processing task. A typical pattern classification algorithm can be broken into two parts: first, the pattern features are extracted; second, these features are compared with a stored set of reference features until a match is found. In the second part, usually one of several clustering algorithms or similarity measures is applied. In this paper, a new application of linear associative memory (LAM) to pattern classification problems is introduced. Here, the clustering algorithms or similarity measures are replaced by a LAM matrix multiplication. With a LAM, the reference features need not be separately stored. Since the second part of most classification algorithms is similar, a LAM standardizes the many clustering algorithms and also allows for a standard digital hardware implementation. Computer simulations on regular textures using a feature extraction algorithm achieved a high percentage of successful classification. In addition, this classification is independent of topological transformations.
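
As a rough illustration of the LAM idea described above (not the authors' implementation), the sketch below builds a memory matrix from one-hot class labels by least squares and classifies a feature vector with a single matrix multiplication; the pseudoinverse training rule, one-hot labels, and synthetic features are assumptions made for the example.

import numpy as np

def train_lam(features, labels, n_classes):
    """Build the LAM matrix W so that x @ W approximates the one-hot label of x."""
    X = np.asarray(features, dtype=float)       # shape (n_samples, n_features)
    Y = np.eye(n_classes)[np.asarray(labels)]   # shape (n_samples, n_classes)
    # Least-squares (pseudoinverse) fit; classification then needs only a
    # matrix multiplication, replacing an explicit similarity search.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W                                    # shape (n_features, n_classes)

def classify_lam(W, feature_vector):
    """Classify with one matrix multiplication followed by an argmax over class scores."""
    scores = np.asarray(feature_vector, dtype=float) @ W
    return int(np.argmax(scores))

# Tiny run on synthetic (hypothetical) features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 4)), rng.normal(1.0, 0.3, size=(20, 4))])
y = np.array([0] * 20 + [1] * 20)
W = train_lam(X, y, n_classes=2)
print(classify_lam(W, X[0]), classify_lam(W, X[25]))   # expected: 0 1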
Multiple pattern classification by sparse subspace decomposition
A robust classification method is developed on the basis of sparse subspace decomposition. This method tries to decompose a mixture of subspaces of unlabeled data (queries) into as few class subspaces as possible. Each query is classified into the class whose subspace contributes significantly to the decomposed subspace. Multiple queries from different classes can be simultaneously classified into their respective classes. A practical greedy algorithm for the sparse subspace decomposition is designed for the classification. The present method achieves a high recognition rate and robust performance by exploiting joint sparsity.
Comment: 8 pages, 3 figures, 2nd IEEE International Workshop on Subspace Methods, Workshop Proceedings of ICCV 200
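
A hedged sketch of the general flavour of greedy class-subspace selection follows; the SVD-based class bases, the residual-energy selection criterion, and the per-query assignment rule are illustrative choices and not the authors' algorithm.

import numpy as np

def class_basis(samples, rank):
    """Orthonormal basis (columns) for one class subspace, from that class's samples."""
    U, _, _ = np.linalg.svd(np.asarray(samples, dtype=float).T, full_matrices=False)
    return U[:, :rank]

def greedy_subspace_classify(queries, bases, max_classes=3, tol=1e-3):
    """Select as few class subspaces as possible, then label each query."""
    Q = np.asarray(queries, dtype=float).T          # columns are query vectors
    residual = Q.copy()
    selected = []
    for _ in range(max_classes):
        # Greedy step: pick the class whose subspace captures the most residual energy.
        gains = [np.linalg.norm(B.T @ residual) for B in bases]
        best = int(np.argmax(gains))
        if best in selected or gains[best] < tol:
            break
        selected.append(best)
        B = np.hstack([bases[c] for c in selected])
        residual = Q - B @ np.linalg.lstsq(B, Q, rcond=None)[0]
    # Each query goes to the selected class whose subspace explains most of its energy.
    labels = []
    for q in Q.T:
        contributions = [np.linalg.norm(bases[c].T @ q) for c in selected]
        labels.append(selected[int(np.argmax(contributions))])
    return labels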
Generalization of form in visual pattern classification.
Human observers were trained to criterion in classifying compound Gabor signals with symmetry relationships, and were then tested with each of 18 blob-only versions of the learning set. Generalization to dark-only and light-only blob versions of the learning signals, as well as to dark-and-light blob versions, was found to be excellent, thus implying virtually perfect generalization of the ability to classify mirror-image signals. The hypothesis that the learning signals are internally represented in terms of a 'blob code' with explicit labelling of contrast polarities was tested by predicting observed generalization behaviour in terms of various types of signal representations (pixelwise, Laplacian pyramid, curvature pyramid, ON/OFF, local maxima of Laplacian and curvature operators) and a minimum-distance rule. Most representations could explain generalization for dark-only and light-only blob patterns but not for the high-thresholded versions thereof. This led to the proposal of a structure-oriented blob code. Whether such a code could be used in conjunction with simple classifiers or should be transformed into a propositional scheme of representation operated upon by a rule-based classification process remains an open question.
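
The minimum-distance rule used to generate the predictions above can be sketched generically as nearest-prototype classification in a chosen representation; the Euclidean metric and the identity (pixelwise) representation used here are assumptions for illustration only.

import numpy as np

def minimum_distance_classify(test_signal, learned_signals, labels, represent=lambda s: s):
    """Assign test_signal the label of the nearest learned signal in representation space."""
    t = represent(np.asarray(test_signal, dtype=float)).ravel()
    distances = [np.linalg.norm(t - represent(np.asarray(s, dtype=float)).ravel())
                 for s in learned_signals]
    return labels[int(np.argmin(distances))]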
Adaptive imputation of missing values for incomplete pattern classification
In the classification of incomplete patterns, the missing values can either play a crucial role in determining the class or have little (or even no) influence on the classification result, depending on the context. We propose a credal classification method for incomplete patterns with adaptive imputation of missing values based on belief function theory. First, we try to classify the object (the incomplete pattern) using only the available attribute values. The underlying assumption is that the missing information is not crucial for classification if a specific class can be found for the object using only the available information. In this case, the object is committed to that particular class. However, if the object cannot be classified without ambiguity, the missing values play an essential role in achieving an accurate classification. In this case, the missing values are imputed using K-nearest neighbor (K-NN) and self-organizing map (SOM) techniques, and the edited (imputed) pattern is then classified. The (original or edited) pattern is classified with respect to each training class, and the classification results, represented by basic belief assignments, are fused with appropriate combination rules to produce the credal classification. The object is allowed to belong, with different masses of belief, to specific classes and to meta-classes (particular disjunctions of several single classes). The credal classification captures the uncertainty and imprecision of the classification well, and the introduction of meta-classes effectively reduces the misclassification rate. The effectiveness of the proposed method relative to classical methods is demonstrated in several experiments on artificial and real data sets.
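
A simplified sketch of the adaptive-imputation flow is given below; the ambiguity test via a vote margin, plain K-NN voting, and the omission of the SOM step and belief-function fusion are simplifications of the abstract rather than the authors' exact procedure.

import numpy as np
from collections import Counter

def knn_votes(x, X, y, mask, k=5):
    """K-NN class votes using only the attributes flagged True in mask."""
    d = np.linalg.norm(X[:, mask] - x[mask], axis=1)
    neighbours = np.argsort(d)[:k]
    return Counter(y[neighbours]), neighbours

def adaptive_classify(x, missing, X, y, k=5, margin=0.3):
    """x: pattern with NaNs at missing positions; missing: boolean mask of missing attributes;
    X, y: complete training patterns (numpy arrays) and their labels."""
    votes, neighbours = knn_votes(x, X, y, ~missing, k)
    (top_class, n1), *rest = votes.most_common()
    n2 = rest[0][1] if rest else 0
    if (n1 - n2) / k >= margin:
        return top_class                       # available attributes already decide the class
    # Ambiguous case: impute missing attributes from the neighbours, then reclassify.
    x_full = x.copy()
    x_full[missing] = X[neighbours][:, missing].mean(axis=0)
    full_votes, _ = knn_votes(x_full, X, y, np.ones_like(missing), k)
    return full_votes.most_common(1)[0][0]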
The Cumulative Distribution Transform and Linear Pattern Classification
Discriminating data classes emanating from sensors is an important problem with many applications in science and technology. We describe a new transform for pattern identification that interprets patterns as probability density functions and has special properties with regard to classification. The transform, which we denote the Cumulative Distribution Transform (CDT), is invertible, with well-defined forward and inverse operations. We show that it can be useful in 'parsing out' variations (confounds) that are 'Lagrangian' (displacement and intensity variations) by converting them to 'Eulerian' (intensity variations) in transform space. This conversion is the basis for our main result, which describes when the CDT allows linear classification in transform space. We also describe several properties of the transform and show, with computational experiments using both real and simulated data, that the CDT can help render a variety of real-world problems simpler to solve.
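
Below is a numerical sketch of a 1D CDT computation, under the assumption that the transform matches the signal's CDF to a reference CDF and records the resulting displacement weighted by the square root of the reference density; this follows the published definition as best recalled and should be checked against the paper.

import numpy as np

def cdt(signal, reference, x):
    """Cumulative Distribution Transform of `signal` w.r.t. `reference` on the grid x."""
    dx = x[1] - x[0]
    s = np.asarray(signal, dtype=float)
    s0 = np.asarray(reference, dtype=float)
    s = s / (s.sum() * dx)                      # treat both inputs as probability densities
    s0 = s0 / (s0.sum() * dx)
    F = np.cumsum(s) * dx                       # CDF of the signal
    F0 = np.cumsum(s0) * dx                     # CDF of the reference
    f = np.interp(F0, F, x)                     # f solves F(f(x)) = F0(x)
    return (f - x) * np.sqrt(s0)                # displacement map, reference-weighted

# A translated copy of the reference yields a constant displacement f(x) - x,
# which is why translation becomes a simple operation in transform space.
x = np.linspace(-10.0, 10.0, 2001)
ref = np.exp(-0.5 * x**2)
sig = np.exp(-0.5 * (x - 2.0) ** 2)
transformed = cdt(sig, ref, x)
centre = slice(900, 1101)                       # x in [-1, 1], away from the tails
print(np.allclose(transformed[centre],
                  2.0 * np.sqrt(ref[centre] / (ref.sum() * (x[1] - x[0]))), atol=1e-2))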
Hierarchical incremental class learning with reduced pattern training
Hierarchical Incremental Class Learning (HICL) is a new task decomposition method that addresses the pattern classification problem. HICL has been shown to be a good classifier, but closer examination reveals areas for potential improvement. This paper proposes a theoretical model to evaluate the performance of HICL and presents an approach to improve its classification accuracy by applying the concept of Reduced Pattern Training (RPT). The theoretical analysis shows that HICL can achieve better classification accuracy than Output Parallelism [1]. The procedure for RPT is described and compared with the original training procedure. RPT systematically reduces the size of the training data set based on the order in which sub-networks are built. The results from four benchmark classification problems show much promise for the improved model.
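
One way to read the RPT step is that the training set shrinks as sub-networks are built in order, with each newly built sub-network removing its class's patterns from the pool used by later ones; the sketch below encodes that interpretation, which is an assumption rather than the authors' stated procedure.

def reduced_pattern_training(training_set, class_order, train_subnetwork):
    """training_set: list of (features, label) pairs; class_order: classes in build order;
    train_subnetwork: callable(class_label, patterns) -> a trained sub-network."""
    subnetworks = []
    remaining = list(training_set)
    for cls in class_order:
        # Each sub-network trains on the patterns still remaining, so later
        # sub-networks see progressively smaller training sets.
        subnetworks.append(train_subnetwork(cls, remaining))
        # Assumed reduction rule: drop this class's patterns once it is learned.
        remaining = [(features, label) for (features, label) in remaining if label != cls]
    return subnetworks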
