Classification of linear codes exploiting an invariant
We consider the problem of computing the equivalence classes of a set of
linear codes. This problem arises when new codes are obtained by extending
codes of lower dimension. We propose a technique that exploits an easily
computed invariant to reduce the computational complexity of the
classification process. Using this technique, the [13,5,8]_7, the
[14,5,9]_8 and the [15,4,11]_9 codes have been classified. These
classifications enabled us to solve the packing problem for NMDS codes for
q=7,8,9. The same technique can be applied to the classification of other
structures.
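As a toy illustration of the kind of cheap invariant that can rule out equivalence (a hypothetical example; the paper's own invariant may differ), the weight distribution of a linear code over GF(q) is easy to enumerate for small parameters and is preserved by code equivalence, so codes with differing distributions need never be compared further:

```python
from itertools import product
from collections import Counter

def weight_distribution(G, q):
    """Number of codewords of each Hamming weight in the linear code
    generated by the rows of G over GF(q), q prime.  The distribution
    is preserved by code equivalence, so codes with different
    distributions lie in different equivalence classes."""
    dist = Counter()
    for msg in product(range(q), repeat=len(G)):
        codeword = [sum(m * g for m, g in zip(msg, col)) % q
                    for col in zip(*G)]
        dist[sum(c != 0 for c in codeword)] += 1
    return dict(dist)

# a [4,2,3] code over GF(3)
G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]
print(weight_distribution(G, 3))  # {0: 1, 3: 8}
```

Equal distributions are inconclusive, which is why such invariants serve only to prune the pairwise equivalence tests, not to replace them.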
Polynomial time algorithms for multicast network code construction
The famous max-flow min-cut theorem states that a source node s can send information through a network (V, E) to a sink node t at a rate determined by the min-cut separating s and t. Recently, it has been shown that this rate can also be achieved for multicasting to several sinks provided that the intermediate nodes are allowed to re-encode the information they receive. We demonstrate examples of networks where the achievable rates obtained by coding at intermediate nodes are arbitrarily larger than if coding is not allowed. We give deterministic polynomial time algorithms and even faster randomized algorithms for designing linear codes for directed acyclic graphs with edges of unit capacity. We extend these algorithms to integer capacities and to codes that are tolerant to edge failures.
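The min-cut bound the abstract refers to can be checked on the classic butterfly network with a short max-flow routine (an illustrative sketch, not the algorithm of the paper; the node labels are made up):

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Edmonds-Karp max flow on n nodes; edges is a list of (u, v, cap)."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        # bottleneck along the path, then push flow along it
        b, v = float('inf'), t
        while v != s:
            b = min(b, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= b
            cap[v][parent[v]] += b
            v = parent[v]
        total += b

# Butterfly network: 0 = source, 3 -> 4 is the bottleneck edge,
# 5 and 6 are the two sinks; every edge has unit capacity.
E = [(0, 1, 1), (0, 2, 1), (1, 3, 1), (2, 3, 1), (3, 4, 1),
     (1, 5, 1), (2, 6, 1), (4, 5, 1), (4, 6, 1)]
print(max_flow(7, E, 0, 5), max_flow(7, E, 0, 6))  # 2 2
```

Each sink individually has min-cut 2, so coding at the bottleneck node achieves rate 2 to both sinks simultaneously, which plain routing on unit-capacity edges cannot.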
Lecture notes: Semidefinite programs and harmonic analysis
Lecture notes for the tutorial at the workshop HPOPT 2008 - 10th
International Workshop on High Performance Optimization Techniques (Algebraic
Structure in Semidefinite Programming), June 11th to 13th, 2008, Tilburg
University, The Netherlands.
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, or computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts. To appear in Foundations and
Trends in Computer Graphics and Vision.
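The idea of "representing data with linear combinations of a few dictionary elements" can be made concrete with a minimal greedy matching-pursuit sketch (a toy stand-in for the dictionary-learning pipelines the monograph surveys; the dictionary and signal here are made up):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(x, D, k):
    """Greedily pick at most k atoms from dictionary D (unit-norm vectors)
    to represent x; returns {atom index: coefficient} and the residual."""
    residual = list(x)
    coeffs = {}
    for _ in range(k):
        # atom most correlated with what is left unexplained
        j = max(range(len(D)), key=lambda i: abs(dot(residual, D[i])))
        c = dot(residual, D[j])
        coeffs[j] = coeffs.get(j, 0.0) + c
        residual = [r - c * d for r, d in zip(residual, D[j])]
    return coeffs, residual

# toy dictionary: the standard basis of R^4 plus one redundant atom
s = 0.7071067811865476  # 1/sqrt(2)
D = [[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0],
     [s, s, 0, 0]]
coeffs, residual = matching_pursuit([3.0, 0.0, 2.0, 0.0], D, 2)
print(coeffs)  # {0: 3.0, 2: 2.0}: the signal is exactly 3*atom0 + 2*atom2
```

In learned-dictionary settings the atoms themselves are then updated to fit the data, which is the adaptation step the monograph focuses on.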
Sparse Coding on Symmetric Positive Definite Manifolds using Bregman Divergences
This paper introduces sparse coding and dictionary learning for Symmetric
Positive Definite (SPD) matrices, which are often used in machine learning,
computer vision and related areas. Unlike traditional sparse coding schemes
that work in vector spaces, in this paper we discuss how SPD matrices can be
described by a sparse combination of dictionary atoms, where the atoms are also
SPD matrices. We propose to seek sparse coding by embedding the space of SPD
matrices into Hilbert spaces through two types of Bregman matrix divergences.
This not only leads to an efficient way of performing sparse coding, but also
an online and iterative scheme for dictionary learning. We apply the proposed
methods to several computer vision tasks where images are represented by region
covariance matrices. Our proposed algorithms outperform state-of-the-art
methods on a wide range of classification tasks, including face recognition,
action recognition, material classification and texture categorization.
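One widely used Bregman matrix divergence on SPD matrices, the Burg (log-det) divergence, is straightforward to compute; a minimal 2x2 sketch (illustrative only, not the paper's implementation):

```python
import math

def burg_divergence(X, Y):
    """Burg (log-det) Bregman divergence for 2x2 SPD matrices:
    D(X, Y) = tr(X Y^{-1}) - log det(X Y^{-1}) - n, here n = 2.
    It is zero iff X == Y and, unlike the Euclidean distance,
    respects the geometry of the SPD cone."""
    (a, b), (c, d) = Y
    det_y = a * d - b * c
    y_inv = [[d / det_y, -b / det_y], [-c / det_y, a / det_y]]
    trace = sum(X[i][k] * y_inv[k][i] for i in range(2) for k in range(2))
    det_x = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return trace - math.log(det_x / det_y) - 2

I2 = [[1.0, 0.0], [0.0, 1.0]]
print(burg_divergence([[2.0, 0.0], [0.0, 2.0]], I2))  # 2 - 2*log(2)
```

Divergences of this form admit kernel (Hilbert-space) embeddings, which is what makes sparse coding with SPD-matrix atoms tractable in the approach described above.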
Dynamical Generation of Noiseless Quantum Subsystems
We present control schemes for open quantum systems that combine decoupling
and universal control methods with coding procedures. By exploiting a general
algebraic approach, we show how appropriate encodings of quantum states yield
universal control over dynamically generated noise-protected subsystems with
limited control resources. In particular, we provide an
efficient scheme for performing universal encoded quantum computation in a wide
class of systems subjected to linear non-Markovian quantum noise and supporting
Heisenberg-type internal Hamiltonians.
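A standard toy example of a noise-protected encoding (a textbook decoherence-free subspace, not the paper's construction): under collective dephasing exp(-i*theta*(Z1+Z2)/2), the two-qubit states |01> and |10> acquire no phase at all, so a logical qubit stored in their span is untouched by the noise:

```python
import cmath

def collective_dephasing(state, theta):
    """Apply exp(-i*theta*(Z1 + Z2)/2) to a two-qubit state given as
    amplitudes over the basis |00>, |01>, |10>, |11>.  The Z eigenvalue
    is +1 for |0> and -1 for |1>."""
    out = []
    for idx, amp in enumerate(state):
        z1 = 1 - 2 * (idx >> 1)   # first qubit
        z2 = 1 - 2 * (idx & 1)    # second qubit
        out.append(amp * cmath.exp(-1j * theta * (z1 + z2) / 2))
    return out

# logical qubit 0.6|01> + 0.8|10> lives in the protected subspace
# (z1 + z2 = 0 on both basis states), so it is unchanged:
protected = collective_dephasing([0, 0.6, 0.8, 0], theta=1.3)
# |00> and |11> pick up opposite phases and would dephase:
exposed = collective_dephasing([1, 0, 0, 0], theta=1.3)
```

The dynamical-decoupling schemes in the abstract generalize this idea: instead of a fixed symmetry of the noise, suitable control pulses generate the protected subsystem.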
Learning Multi-Scale Representations for Material Classification
The recent progress in sparse coding and deep learning has made unsupervised
feature learning methods a strong competitor to hand-crafted descriptors. In
computer vision, success stories of learned features have been predominantly
reported for object recognition tasks. In this paper, we investigate if and how
feature learning can be used for material recognition. We propose two
strategies to incorporate scale information into the learning procedure
resulting in a novel multi-scale coding procedure. Our results show that our
learned features for material recognition outperform hand-crafted descriptors
on the FMD and the KTH-TIPS2 material classification benchmarks.