21,039 research outputs found
Optimizing linear discriminant error correcting output codes using particle swarm optimization
Error Correcting Output Codes provide an efficient strategy for dealing with multi-class classification problems. Under this technique, a multi-class problem is decomposed into several binary ones. Binary classifiers are applied to the resulting sub-problems and, by combining the acquired solutions, the initial multi-class problem is solved. In this paper we consider the optimization of the Linear Discriminant Error Correcting Output Codes framework using Particle Swarm Optimization. In particular, we apply the Particle Swarm Optimization algorithm to optimally select the free parameters that control the split of the initial problem's classes into sub-classes. Moreover, by using the Support Vector Machine as the classifier, we can additionally apply the Particle Swarm Optimization algorithm to tune its free parameters. Our experimental results show that applying Particle Swarm Optimization to the Sub-class Linear Discriminant Error Correcting Output Codes framework yields a significant improvement in classification performance. © 2011 Springer-Verlag
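The decompose-and-recombine scheme described above can be sketched as follows. The coding matrix, class count, and binary outputs are illustrative stand-ins (not the paper's sub-class LDA/PSO setup, and no SVMs are trained): each class gets a binary codeword, and a sample is assigned to the class whose codeword is nearest in Hamming distance to the vector of binary-classifier outputs.

```python
# Minimal ECOC decoding sketch with a hypothetical 4-class, 6-dichotomy
# coding matrix; in a real system each column would correspond to a
# trained binary classifier (e.g. an SVM).
CODING_MATRIX = {
    0: [0, 0, 1, 1, 0, 1],
    1: [1, 0, 0, 1, 1, 0],
    2: [0, 1, 1, 0, 1, 0],
    3: [1, 1, 0, 0, 0, 1],
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def ecoc_decode(binary_outputs):
    """Pick the class whose codeword minimizes Hamming distance."""
    return min(CODING_MATRIX,
               key=lambda c: hamming(CODING_MATRIX[c], binary_outputs))

# One binary classifier errs relative to class 2's codeword, yet the
# error-correcting property of the code still recovers class 2.
print(ecoc_decode([0, 1, 1, 0, 1, 1]))  # -> 2
```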
Error-Correcting Factorization
Error Correcting Output Codes (ECOC) is a successful technique in multi-class
classification, which is a core problem in Pattern Recognition and Machine
Learning. A major advantage of ECOC over other methods is that the multi-class
problem is decoupled into a set of binary problems that are solved
independently. However, literature defines a general error-correcting
capability for ECOCs without analyzing how it distributes among classes,
hindering a deeper analysis of pair-wise error-correction. To address these
limitations this paper proposes an Error-Correcting Factorization (ECF) method;
our contribution is four-fold: (I) We propose a novel representation of the
error-correction capability, called the design matrix, that enables us to build
an ECOC on the basis of allocating correction to pairs of classes. (II) We
derive the optimal code length of an ECOC using rank properties of the design
matrix. (III) ECF is formulated as a discrete optimization problem, and a
relaxed solution is found using an efficient constrained block coordinate
descent approach. (IV) Enabled by the flexibility introduced with the design
matrix we propose to allocate the error-correction on classes that are prone to
confusion. Experimental results in several databases show that when allocating
the error-correction to confusable classes ECF outperforms state-of-the-art
approaches.
Comment: Under review at TPAMI.
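The per-pair error-correction that ECF allocates can be illustrated by computing, for a given coding matrix, the Hamming distance d_ij between each pair of codewords: pair (i, j) survives floor((d_ij - 1) / 2) binary-classifier errors. The matrix below is a toy example (not one produced by ECF); note how pair (0, 3) gets no correction at all, which is exactly the kind of imbalance the paper's design matrix makes visible.

```python
from itertools import combinations

# Toy binary coding matrix: one codeword (row) per class.
M = [
    [0, 0, 1, 1, 0, 1],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 1, 1, 1, 0, 1],
]

def pairwise_correction(matrix):
    """Map each class pair (i, j) to the number of binary-classifier
    errors it tolerates: floor((d_ij - 1) / 2) for Hamming distance d_ij."""
    out = {}
    for i, j in combinations(range(len(matrix)), 2):
        d = sum(a != b for a, b in zip(matrix[i], matrix[j]))
        out[(i, j)] = (d - 1) // 2
    return out

print(pairwise_correction(M))
# pair (0, 3) has distance 2, so it corrects 0 errors; all other pairs correct 1
```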
Some Applications of Coding Theory in Computational Complexity
Error-correcting codes and related combinatorial constructs play an important
role in several recent (and old) results in computational complexity theory. In
this paper we survey results on locally-testable and locally-decodable
error-correcting codes, and their applications to complexity theory and to
cryptography.
Locally decodable codes are error-correcting codes with sub-linear time
error-correcting algorithms. They are related to private information retrieval
(a type of cryptographic protocol), and they are used in average-case
complexity and to construct "hard-core predicates" for one-way permutations.
Locally testable codes are error-correcting codes with sub-linear time
error-detection algorithms, and they are the combinatorial core of
probabilistically checkable proofs.
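The textbook example of a locally decodable code is the Hadamard code, whose codeword for a message x in {0,1}^n lists the parity of the inner product of x with every a in {0,1}^n; bit x_i can then be recovered from just two (possibly corrupted) positions, a and a XOR e_i. The sketch below (illustrative, not a construction from the survey) votes over all such pairs so the demo is deterministic; a genuine sub-linear-time local decoder samples only a few random a's.

```python
def hadamard_encode(x):
    """Codeword position a holds <x, a> mod 2 (parity of the AND)."""
    n = len(x)
    xm = sum(b << i for i, b in enumerate(x))
    return [bin(xm & a).count("1") % 2 for a in range(2 ** n)]

def local_decode_bit(codeword, n, i):
    """Each pair (a, a XOR e_i) gives one 2-query estimate of x_i;
    take the majority so a few corrupted positions are outvoted."""
    votes = [codeword[a] ^ codeword[a ^ (1 << i)] for a in range(2 ** n)]
    return int(sum(votes) > len(votes) // 2)

x = [1, 0, 1, 1]
cw = hadamard_encode(x)
cw[3] ^= 1  # corrupt one codeword position
print([local_decode_bit(cw, 4, i) for i in range(4)])  # -> [1, 0, 1, 1]
```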
Heuristic Ternary Error-Correcting Output Codes Via Weight Optimization and Layered Clustering-Based Approach
One important classifier ensemble for multiclass classification problems is
Error-Correcting Output Codes (ECOCs). It bridges multiclass problems and
binary-class classifiers by decomposing multiclass problems into a series of
binary-class problems. In this paper, we present a heuristic ternary code,
named Weight Optimization and Layered Clustering-based ECOC (WOLC-ECOC). It
starts with an arbitrary valid ECOC and iterates the following two steps until
the training risk converges. The first step, named Layered Clustering-based
ECOC (LC-ECOC), constructs multiple strong classifiers on the most confusing
binary-class problem. The second step adds the new classifiers to ECOC by a
novel Optimized Weighted (OW) decoding algorithm, where the optimization
problem of the decoding is solved by the cutting-plane algorithm. Technically,
LC-ECOC prevents the heuristic training process from being blocked by difficult
binary-class problems. OW decoding guarantees that the training risk does not
increase, which keeps the code length small. Results on 14 UCI datasets and a
music genre classification problem demonstrate the effectiveness of WOLC-ECOC.
A New Approach in Persian Handwritten Letters Recognition Using Error Correcting Output Coding
Classification ensemble, which uses the weighted polling of outputs, is the
art of combining a set of basic classifiers for generating high-performance,
robust and more stable results. This study aims to improve the results of
identifying the Persian handwritten letters using Error Correcting Output
Coding (ECOC) ensemble method. Furthermore, the feature selection is used to
reduce the costs of errors in our proposed method. ECOC is a method for
decomposing a multi-way classification problem into many binary classification
tasks, and then combining the results of the subtasks into a hypothesized
solution to the original problem. Firstly, the image features are extracted by
Principal Components Analysis (PCA). After that, ECOC is used for identifying
the Persian handwritten letters, with a Support Vector Machine (SVM) as the
base classifier. The empirical results of applying this
ensemble method using 10 real-world data sets of Persian handwritten letters
indicate that this method has better results in identifying the Persian
handwritten letters than other ensemble methods and also single
classifications. Moreover, by testing a number of different features, this
paper found that we can reduce the additional cost in feature selection stage
by using this method.
Comment: Journal of Advances in Computer Research.
Optimizing class partitioning in multi-class classification using a descriptive control language
Many of the best statistical classification algorithms are binary
classifiers, that is they can only distinguish between one of two classes. The
number of possible ways of generalizing binary classification to multi-class
increases exponentially with the number of classes. There is some indication
that the best method of doing so will depend on the dataset. As such, we are
particularly interested in data-driven solution design, whether based on prior
considerations or on empirical examination of the data. Here we demonstrate how
a recursive control language can be used to describe a multitude of different
partitioning strategies in multi-class classification, including those in most
common use. We use it both to manually construct new partitioning
configurations as well as to examine those that have been automatically
designed. Eight different strategies are tested on eight different datasets
using both support vector machines (SVM) as well as logistic regression as the
base binary classifiers. Numerical tests suggest that a one-size-fits-all
solution consisting of one-versus-one is appropriate for most datasets; however,
one dataset benefitted from the techniques applied in this paper. The best
solution exploited a property of the dataset to produce an uncertainty
coefficient 36% higher (0.016 absolute gain) than one-vs.-one. Adaptive
solutions that empirically examined the data also produced gains over
one-vs.-one while also being faster.
Comment: Changed title and abstract, removed section on quadratic
optimization; other than that the content is mostly the same.
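One-versus-one, the partitioning strategy the numerical tests favour, reduces to a vote over all pairwise classifiers. In the sketch below, the pairwise decision function is a hypothetical stand-in for a trained SVM or logistic-regression model:

```python
from itertools import combinations
from collections import Counter

def one_vs_one_predict(classes, pairwise_decide):
    """Each pairwise classifier votes for one of its two classes;
    the class with the most votes wins."""
    votes = Counter()
    for a, b in combinations(classes, 2):
        votes[pairwise_decide(a, b)] += 1
    return votes.most_common(1)[0][0]

# Stand-in decision rule: class 2 beats every rival, as a trained
# pairwise classifier might on an easy sample of class 2.
decide = lambda a, b: 2 if 2 in (a, b) else min(a, b)
print(one_vs_one_predict([0, 1, 2, 3], decide))  # -> 2
```

With k classes this trains k(k-1)/2 binary models, versus k for one-versus-all; the quadratic growth is one reason the choice of partitioning strategy can depend on the dataset.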
Quantum Subsystems: Exploring the Complementarity of Quantum Privacy and Error Correction
This paper addresses and expands on the contents of the recent Letter [Phys.
Rev. Lett. 111, 030502 (2013)] discussing private quantum subsystems. Here we
prove several previously presented results, including a condition for a given
random unitary channel to not have a private subspace (although this does not
mean that private communication cannot occur, as was previously demonstrated
via private subsystems) and algebraic conditions that characterize when a
general quantum subsystem or subspace code is private for a quantum channel.
These conditions can be regarded as the private analogue of the Knill-Laflamme
conditions for quantum error correction, and we explore how the conditions
simplify in some special cases. The bridge between quantum cryptography and
quantum error correction provided by complementary quantum channels motivates
the study of a new, more general definition of quantum error correcting code,
and we initiate this study here. We also consider the concept of
complementarity for the general notion of private quantum subsystems.
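For reference, the Knill-Laflamme conditions that this abstract takes as the starting point state that a code with projector $P$ corrects a set of errors $\{E_a\}$ if and only if, for some Hermitian matrix of scalars $(\alpha_{ab})$,

```latex
P E_a^{\dagger} E_b P = \alpha_{ab}\, P \qquad \text{for all } a, b.
```

The private analogue developed in the paper replaces this correctability requirement with the condition that the channel's output reveal nothing about states encoded in the subsystem.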
Quantum error-correcting codes associated with graphs
We present a construction scheme for quantum error correcting codes. The
basic ingredients are a graph and a finite abelian group, from which the code
can explicitly be obtained. We prove necessary and sufficient conditions for
the graph such that the resulting code corrects a certain number of errors.
This allows a simple verification of the 1-error correcting property of
fivefold codes in any dimension. As new examples we construct a large class of
codes saturating the singleton bound, as well as a tenfold code detecting 3
errors.
Comment: 8 pages RevTeX, 5 figures.
Stabilizer codes can be realized as graph codes
We establish the connection between a recent new construction technique for
quantum error correcting codes, based on graphs, and the so-called stabilizer
codes: each stabilizer code can be realized as a graph code and vice versa.
Comment: 7 pages (RevTeX), 7 figures.
Expander-like Codes based on Finite Projective Geometry
We present a novel error-correcting code and decoding algorithm whose
construction is similar to that of expander codes. The code is based on a bipartite graph
derived from the subsumption relations of finite projective geometry, and
Reed-Solomon codes as component codes. We use a modified version of the
well-known Zemor decoding algorithm for expander codes to decode our codes. By
derivation of geometric bounds rather than eigenvalue bounds, it has been
proved that for practical values of the code rate, the random error correction
capability of our codes is much better than those derived for previously
studied graph codes, including Zemor's bound. MATLAB simulations further reveal
that the average-case performance of this code is 10 times better than these
geometric bounds in almost 99% of the test cases. By exploiting the
symmetry of projective space lattices, we have designed a corresponding decoder
that has optimal throughput. The decoder design has been prototyped on Xilinx
Virtex 5 FPGA. The codes are designed for potential applications in secondary
storage media. As an application, we also discuss usage of these codes to
improve the burst-error correction capability of CD-ROM decoders.
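The burst-error improvement mentioned for CD-ROM decoders typically rests on interleaving: symbols of each codeword are spread out in the channel stream so that one physical burst touches only a few symbols of any single codeword. A minimal block-interleaver sketch (illustrative, not the paper's projective-geometry construction):

```python
# Three length-4 "codewords" are stored consecutively in `data`, then
# transmitted column-wise so adjacent channel symbols belong to
# different codewords; any burst of length <= depth therefore hits at
# most one symbol per codeword.
def interleave(data, depth):
    width = len(data) // depth
    rows = [data[r * width:(r + 1) * width] for r in range(depth)]  # codewords
    return [rows[r][j] for j in range(width) for r in range(depth)]

def deinterleave(stream, depth):
    width = len(stream) // depth
    return [stream[j * depth + r] for r in range(depth) for j in range(width)]

stream = interleave(list(range(12)), 3)
# A burst wiping channel positions 3..5 touches symbols 1, 5, 9:
# exactly one symbol from each of the three codewords.
print([stream[k] for k in (3, 4, 5)])  # -> [1, 5, 9]
print(deinterleave(stream, 3))         # -> [0, 1, ..., 11]
```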