An Iteratively Decodable Tensor Product Code with Application to Data Storage
The error pattern correcting code (EPCC) can be constructed to provide a
syndrome decoding table targeting the dominant error events of an inter-symbol
interference channel at the output of the Viterbi detector. For the syndrome
table to be manageable in size and the list of possible error events to be
reasonably short, the EPCC codeword length needs to be sufficiently short.
However, the rate of such a short code would be too low for hard-drive
applications. To accommodate the required large redundancy, it is possible to
record only a highly compressed function of the parity bits of EPCC's tensor
product with a symbol correcting code. In this paper, we show that the proposed
tensor error-pattern correcting code (T-EPCC) is linear time encodable and also
devise a low-complexity soft iterative decoding algorithm for EPCC's tensor
product with q-ary LDPC (T-EPCC-qLDPC). Simulation results show that
T-EPCC-qLDPC achieves performance nearly identical to that of single-level
qLDPC with a 1/2 KB sector at a 50% reduction in decoding complexity. Moreover,
1 KB T-EPCC-qLDPC surpasses the performance of 1/2 KB single-level qLDPC at the
same decoder complexity.
Comment: Hakim Alhussien, Jaekyun Moon, "An Iteratively Decodable Tensor
Product Code with Application to Data Storage"
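The syndrome-table decoding the abstract relies on can be illustrated with a hedged toy sketch (not the paper's actual EPCC construction): build a table mapping syndromes to a list of "dominant" error patterns, then correct a received word by lookup. The parity-check matrix H and the pattern list below are invented for illustration; dominant adjacent double errors are inserted first so they win syndrome collisions, mimicking how EPCC targets the most likely ISI error events.

```python
import numpy as np

# Hypothetical (7,4) Hamming-style parity-check matrix, for illustration only.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
n = H.shape[1]

# "Dominant" error events first (adjacent double-bit flips, typical of ISI),
# then single-bit flips; earlier entries win syndrome-table collisions.
E = np.eye(n, dtype=int)
patterns = [(E[i] + E[i + 1]) % 2 for i in range(n - 1)] + [E[i] for i in range(n)]

# Syndrome decoding table: syndrome -> most likely (dominant) error pattern.
table = {}
for e in patterns:
    table.setdefault(tuple(H @ e % 2), e)

def correct(r):
    """Look up the received word's syndrome and subtract the stored pattern."""
    e = table.get(tuple(H @ r % 2), np.zeros(n, dtype=int))
    return (r + e) % 2

# An adjacent double error on a codeword (here the all-zeros word) is corrected.
r = np.zeros(n, dtype=int)
r[2], r[3] = 1, 1
corrected = correct(r)
```

Keeping the pattern list short is exactly the tension the abstract describes: the table's size grows with the number of targeted events, which is why the EPCC codeword must stay short.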
On the design of an ECOC-compliant genetic algorithm
Genetic Algorithms (GAs) have previously been applied to Error-Correcting Output Codes (ECOC) in state-of-the-art works in order to find a suitable coding matrix. Nevertheless, none of the presented techniques directly takes into account the properties of the ECOC matrix, so the search space considered is unnecessarily large. In this paper, a novel genetic strategy to optimize the ECOC coding step is presented. This strategy redefines the usual crossover and mutation operators to take into account the theoretical properties of the ECOC framework, thereby reducing the search space and allowing the algorithm to converge faster. In addition, a novel operator that is able to enlarge the code in a principled way is introduced. The methodology is tested on several UCI datasets and four challenging computer vision problems. An analysis of the results in terms of performance, code length, and number of Support Vectors shows that the optimization process finds very efficient codes with respect to the trade-off between classification performance and the number of classifiers. Finally, per-dichotomizer classification results show that the novel proposal obtains similar or even better results while defining a more compact set of dichotomies and SVs than state-of-the-art approaches.
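The "properties of the ECOC matrix" that shrink the GA's search space can be made concrete with a small validity check. The rules below (distinct rows so every class has a unique codeword; distinct, non-complementary columns so no two dichotomizers solve the same binary split) are a common textbook formulation of ECOC design constraints, not necessarily the exact constraints enforced by the paper's operators.

```python
import numpy as np

def is_valid_ecoc(M):
    """Check structural ECOC constraints on a {-1,+1} coding matrix:
    distinct rows (class codewords) and distinct, non-complementary
    columns (dichotomies)."""
    cols = {tuple(c) for c in M.T}
    comp = {tuple(-c) for c in M.T}  # a complementary column encodes the same split
    rows = {tuple(r) for r in M}
    return (len(cols) == M.shape[1]
            and not (cols & comp)
            and len(rows) == M.shape[0])

# A 4-class, 3-dichotomy toy matrix over {-1, +1}.
M_good = np.array([[ 1,  1,  1],
                   [-1,  1, -1],
                   [ 1, -1, -1],
                   [-1, -1,  1]])

# Invalid: column 2 is the complement of column 0 (a redundant dichotomy).
M_bad = M_good.copy()
M_bad[:, 2] = -M_bad[:, 0]
```

A GA whose crossover and mutation only ever produce matrices passing such a check never wastes evaluations on degenerate codes, which is the search-space reduction the abstract refers to.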
Estimating Receptive Fields from Responses to Natural Stimuli with Asymmetric Intensity Distributions
The reasons for using natural stimuli to study sensory function are quickly mounting, as recent studies have revealed important differences in neural responses to natural and artificial stimuli. However, natural stimuli typically contain strong correlations and are spherically asymmetric (i.e., stimulus intensities are not symmetrically distributed around the mean), and these statistical complexities can bias receptive field (RF) estimates when standard techniques such as spike-triggered averaging or reverse correlation are used. While a number of approaches have been developed to explicitly correct the bias due to stimulus correlations, there is no complementary technique to correct the bias due to stimulus asymmetries. Here, we develop a method for RF estimation that corrects reverse-correlation RF estimates for the spherical asymmetries present in natural stimuli. Using simulated neural responses, we demonstrate how stimulus asymmetries can bias reverse-correlation RF estimates (even for uncorrelated stimuli) and illustrate how this bias can be removed by explicit correction. We demonstrate the utility of the asymmetry-correction method under experimental conditions by estimating RFs from the responses of retinal ganglion cells to natural stimuli and using these RFs to predict responses to novel stimuli.
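A minimal sketch of the standard reverse-correlation (spike-triggered average) estimator that the paper starts from, using a simulated rectified-linear neuron and symmetric Gaussian stimuli, the regime where the estimator is unbiased. The asymmetry correction itself is the paper's contribution and is not reproduced here; the simulated filter and sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated neuron: rate = max(0, k . s). With Gaussian (spherically symmetric)
# stimuli, the spike-triggered average recovers k up to scale; the paper's
# point is that asymmetric natural stimuli bias exactly this estimator.
d, T = 20, 50_000
k = rng.standard_normal(d)            # "true" receptive field
S = rng.standard_normal((T, d))       # stimulus ensemble (one stimulus per row)
r = np.maximum(S @ k, 0.0)            # nonnegative firing rates

# Response-weighted average of the stimuli (reverse correlation / STA).
sta = (S * r[:, None]).sum(axis=0) / r.sum()

# Alignment between the estimate and the true filter.
cos = sta @ k / (np.linalg.norm(sta) * np.linalg.norm(k))
```

With an asymmetric stimulus distribution (e.g. natural image intensities), the same `sta` computation would systematically deviate from `k`, which is the bias the proposed correction removes.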
Contextual Bag-Of-Visual-Words and ECOC-Rank for Retrieval and Multi-class Object Recognition
Master's Final Project (Projecte Final de Màster), UPC, carried out in collaboration with the Dept. Matemàtica Aplicada i Anàlisi, Universitat de Barcelona.
Multi-class object categorization is an important line of research in the Computer Vision and Pattern Recognition fields. An artificial intelligence system is able to interact with its environment if it can distinguish among a set of cases, instances, situations, objects, etc. The world is inherently multi-class, and thus the efficiency of a system can be determined by its accuracy in discriminating among a set of cases.
A procedure recently applied in the literature is the Bag-Of-Visual-Words
(BOVW). This methodology is based on natural language processing theory, where
sentences are defined based on word frequencies. Analogously, in the pattern
recognition domain, an object is described based on the frequency of appearance
of its parts. However, a general drawback of this method is that the dictionary
construction does not take into account geometrical information about object
parts. In order to include part relations in the BOVW model, we propose the
Contextual BOVW (C-BOVW), where the dictionary construction is guided by a
geometrically-based merging procedure. As a result, objects are described as
sentences in which geometrical information is implicitly considered.
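The plain BOVW step described above can be sketched in a few lines: quantize each local descriptor to its nearest dictionary word and describe the image by its normalized word-frequency histogram. Dictionary size, descriptor dimensionality, and the random "words" below are invented for illustration (real dictionaries come from clustering training descriptors), and the geometrical merging of C-BOVW is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy visual dictionary: k "words" in descriptor space (8-D here instead of
# SIFT's 128-D), stood in for by random vectors.
k, d = 5, 8
words = rng.standard_normal((k, d))

def bovw_histogram(descriptors):
    """Assign each local descriptor to its nearest word and count occurrences."""
    dists = np.linalg.norm(descriptors[:, None, :] - words[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    hist = np.bincount(labels, minlength=k).astype(float)
    return hist / hist.sum()  # the normalized word-frequency "sentence"

# 40 local descriptors extracted from one image (simulated).
img_descriptors = rng.standard_normal((40, d))
h = bovw_histogram(img_descriptors)
```

C-BOVW's change is upstream of this function: it alters how `words` is built, merging words under geometrical guidance so the resulting histograms implicitly encode part layout.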
In order to extend the proposed system to the multi-class case, we used the
Error-Correcting Output Codes (ECOC) framework. State-of-the-art multi-class
techniques are frequently defined as ensembles of binary classifiers. In this sense, the ECOC framework, based on error-correcting principles, has proven to be a powerful tool, able to classify a large number of classes while correcting classification errors produced by the individual learners.
In our case, the C-BOVW sentences are learnt by means of an ECOC configuration, obtaining high discriminative power. Moreover, we used the ECOC outputs obtained by the new methodology to rank classes. In some situations, more than one label is required to work with multiple hypotheses and find similar cases, as in the well-known retrieval problems. In this sense, we also included contextual and semantic information to modify the ECOC outputs and defined an ECOC-rank methodology. By altering the ECOC output values by means of the adjacency of classes based on features and class relations based on ontologies, we also report a significant improvement in class-retrieval problems.
Secure quantum key distribution using squeezed states
We prove the security of a quantum key distribution scheme based on
transmission of squeezed quantum states of a harmonic oscillator. Our proof
employs quantum error-correcting codes that encode a finite-dimensional quantum
system in the infinite-dimensional Hilbert space of an oscillator, and protect
against errors that shift the canonical variables p and q. If the noise in the
quantum channel is weak, squeezing signal states by 2.51 dB (a squeeze factor
e^r=1.34) is sufficient in principle to ensure the security of a protocol that
is suitably enhanced by classical error correction and privacy amplification.
Secure key distribution can be achieved over distances comparable to the
attenuation length of the quantum channel.
Comment: 19 pages, 3 figures, RevTeX and epsf; new section on channel losses
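As a consistency check on the quoted figures, the standard convention relating a squeeze factor to squeezing in decibels gives:

```latex
% A squeeze factor e^r reduces the quadrature variance by e^{-2r}, so
s\,[\mathrm{dB}] = -10\log_{10}\!\bigl(e^{-2r}\bigr) = (20\log_{10}e)\,r \approx 8.686\,r .
% With s = 2.51 dB: r \approx 2.51/8.686 \approx 0.289, hence
% e^{r} \approx e^{0.289} \approx 1.34, matching the quoted squeeze factor.
```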
Beyond One-hot Encoding: lower dimensional target embedding
Target encoding plays a central role when learning Convolutional Neural
Networks. In this realm, One-hot encoding is the most prevalent strategy due to
its simplicity. However, this widespread encoding scheme assumes a flat
label space, thus ignoring rich relationships among labels that can be
exploited during training. In large-scale datasets, data does not span the full
label space, but instead lies in a low-dimensional output manifold. Following
this observation, we embed the targets into a low-dimensional space,
drastically improving convergence speed while preserving accuracy. Our
contribution is twofold: (i) we show that random projections of the label
space are a valid tool for finding such lower-dimensional embeddings,
dramatically boosting convergence rates at zero computational cost; and (ii) we propose
a normalized eigenrepresentation of the class manifold that encodes the targets
with minimal information loss, improving on the accuracy of random-projection
encoding while enjoying the same convergence rates. Experiments on CIFAR-100,
CUB200-2011, ImageNet, and MIT Places demonstrate that the proposed approach
drastically improves convergence speed while reaching very competitive accuracy
rates.
Comment: Published in Image and Vision Computing
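Contribution (i) can be sketched with nothing beyond the abstract: replace one-hot targets with rows of a random Gaussian projection matrix and decode a prediction to the nearest class embedding. All sizes below are illustrative (the paper targets large label spaces such as ImageNet), and the decoding rule is an assumed nearest-neighbour choice.

```python
import numpy as np

rng = np.random.default_rng(3)

# Project a C-class one-hot space down to e dimensions with a random
# Gaussian matrix; row i of P is the low-dimensional target for class i.
C, e = 100, 32
P = rng.standard_normal((C, e)) / np.sqrt(e)

targets = P  # network regresses onto these instead of one-hot vectors

def decode(y_hat):
    """Map a predicted embedding back to the nearest class embedding."""
    return int(np.linalg.norm(targets - y_hat, axis=1).argmin())

# A noiseless prediction of class 17's embedding decodes back to class 17.
predicted_class = decode(targets[17])
```

Random Gaussian rows are nearly orthogonal in high dimensions (a Johnson-Lindenstrauss-style property), which is why such embeddings remain decodable at zero training-time cost; the paper's eigenrepresentation then improves on this by choosing the projection from the data.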
Error-Correcting Neural Sequence Prediction
We propose a novel neural sequence prediction method based on error-correcting output codes (ECOC) that avoids exact softmax normalization and allows a tradeoff between speed and performance. First, instead of minimizing distance measures between the predicted probability distribution and the true distribution, we use error-correcting codes to represent both predictions and outputs. Second, we propose multiple ways to improve accuracy and convergence rates by maximizing the separability between codes that correspond to classes, in proportion to word embedding similarities. Lastly, we introduce our main contribution, Latent Variable Mixture Sampling, a technique used to mitigate exposure bias that can be integrated into the training of latent-variable-based neural sequence predictors such as ECOC. This involves mixing the latent codes of past predictions and past targets in one of two ways: (1) according to a predefined sampling schedule, or (2) via a differentiable sampling procedure whereby the mixing probability is learned throughout training by replacing the greedy argmax operation with a smooth approximation. ECOC-NSP leads to consistent improvements on language modelling datasets, and the proposed Latent Variable Mixture Sampling methods perform well on text generation tasks such as image captioning.
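The scheduled variant of the code-mixing step can be sketched as follows. The function name and the Bernoulli-per-timestep schedule are assumptions for illustration; the differentiable variant (learned mixing probability via a smooth argmax relaxation) is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)

def mix_codes(target_codes, predicted_codes, p_model):
    """Scheduled-sampling-style mixing: at each timestep, feed the model's own
    previous code with probability p_model, otherwise the ground-truth code.
    A simplified sketch of the 'predefined sampling schedule' variant."""
    use_model = rng.random(len(target_codes)) < p_model
    return np.where(use_model[:, None], predicted_codes, target_codes)

T, d = 10, 16                        # sequence length, code dimensionality
gold = rng.integers(0, 2, (T, d))    # ground-truth ECOC codes per timestep
pred = rng.integers(0, 2, (T, d))    # codes of the model's past predictions

# At the start of training the schedule feeds only ground truth (p_model = 0);
# annealing p_model toward 1 exposes the model to its own predictions.
mixed = mix_codes(gold, pred, p_model=0.0)
```

Annealing `p_model` over training is what mitigates exposure bias: the model gradually conditions on the kind of (imperfect) codes it will see at inference time.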
…