
    Hadamard Equiangular Tight Frames

    An equiangular tight frame (ETF) is a type of optimal packing of lines in Euclidean space. Such a frame is often represented as the columns of a short, fat matrix. In certain applications we want this matrix to be flat, that is, to have all entries of modulus one. In particular, real flat ETFs are equivalent to self-complementary binary codes that achieve the Grey-Rankin bound. Some flat ETFs are (complex) Hadamard ETFs, meaning they arise by extracting rows from a (complex) Hadamard matrix. These include harmonic ETFs, which are obtained by extracting the rows of a character table that correspond to a difference set in the underlying finite abelian group. In this paper, we give some new results about flat ETFs. One of these results gives an explicit Naimark complement for all Steiner ETFs, which in turn implies that all Kirkman ETFs are possibly-complex Hadamard ETFs. This in particular produces a new infinite family of real flat ETFs. Another result establishes an equivalence between real flat ETFs and certain types of quasi-symmetric designs, resulting in a new infinite family of such designs.
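
    The harmonic construction mentioned in the abstract is easy to check numerically. Below is a minimal NumPy sketch, not taken from the paper, that builds the 3x7 harmonic ETF from the (7,3,1) difference set {1,2,4} in Z_7 and verifies flatness, equiangularity, and tightness; all variable names are our own.

        import numpy as np

        # Harmonic ETF from the (7,3,1) difference set {1,2,4} in Z_7.
        N = 7
        D = np.array([1, 2, 4])              # difference set (quadratic residues mod 7)
        rows = np.exp(2j * np.pi * np.outer(D, np.arange(N)) / N)  # extracted character-table rows
        Phi = rows / np.sqrt(len(D))         # unit-norm columns; entries have modulus 1/sqrt(3)

        G = Phi.conj().T @ Phi               # Gram matrix of the frame
        off = np.abs(G[~np.eye(N, dtype=bool)])
        print("flat:", np.allclose(np.abs(rows), 1))            # all entries of modulus one
        print("equiangular:", np.allclose(off, off[0]))         # constant off-diagonal modulus
        print("tight:", np.allclose(Phi @ Phi.conj().T,
                                    (N / len(D)) * np.eye(len(D))))  # frame operator = (N/M) I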

    Decoding Protocols for Classical Communication on Quantum Channels

    We study the problem of decoding classical information encoded on quantum states at the output of a quantum channel, with particular focus on increasing the communication rate towards the maximum allowed by Quantum Mechanics. After a brief introduction to the main theoretical formalism employed in the rest of the thesis, i.e., continuous-variable Quantum Information Theory and Quantum Communication Theory, we consider several decoding schemes. First, we treat the problem from an abstract perspective, presenting a method to decompose any quantum measurement into a sequence of easier, nested measurements through a binary-tree search. Furthermore, we show that this decomposition can be used to build a capacity-achieving decoding protocol for classical communication on quantum channels and to solve the optimal discrimination of some sets of quantum states. These results clarify the structure of optimal quantum measurements, showing that it can be recast in a more operational and experimentally oriented fashion. Second, we take a more practical approach and describe three receiver structures for coherent states of the electromagnetic field, with applications to single-mode state discrimination and multi-mode decoding at the output of a quantum channel. We treat the problem bearing in mind the technological limitations faced nowadays in the field of optical communications: we evaluate the performance of general decoding schemes based on such technology and report improved performance for two schemes, the first employing a non-Gaussian transformation and the second employing a code tailored to be read out easily by the most common detectors. Finally, we characterize a large class of multi-mode adaptive receivers based on common technological resources, obtaining a no-go theorem for their capacity.
    Comment: PhD thesis. 171 pages, 16 figures.
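
    As a concrete instance of the single-mode discrimination problem treated in the thesis, the sketch below (our own illustration, not code or data from the thesis) compares the error probability of three textbook strategies for discriminating the equiprobable coherent states |+alpha> and |-alpha>: homodyne detection, the Kennedy receiver (displacement plus ideal on/off detection), and the Helstrom bound.

        import numpy as np
        from scipy.special import erfc

        alpha = np.linspace(0.1, 1.5, 8)     # real coherent-state amplitude

        # Helstrom bound: overlap |<-a|a>|^2 = exp(-4 a^2)
        p_helstrom = 0.5 * (1 - np.sqrt(1 - np.exp(-4 * alpha**2)))
        # Homodyne receiver: sign of the measured quadrature
        p_homodyne = 0.5 * erfc(np.sqrt(2) * alpha)
        # Kennedy receiver: displace |-a> to vacuum; error = no click on |2a>
        p_kennedy = 0.5 * np.exp(-4 * alpha**2)

        for a, ph, pq, pk in zip(alpha, p_helstrom, p_homodyne, p_kennedy):
            print(f"alpha={a:.2f}  Helstrom={ph:.3e}  homodyne={pq:.3e}  Kennedy={pk:.3e}")

    The Kennedy receiver already beats homodyne at large amplitude but stays a factor of two above the Helstrom bound, which is the gap that the non-Gaussian and adaptive schemes discussed in the thesis aim to close.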

    Small Width, Low Distortions: Quantized Random Embeddings of Low-complexity Sets

    Under which conditions, and with which distortions, can we preserve the pairwise distances of low-complexity vectors, e.g., for structured sets such as the set of sparse vectors or that of low-rank matrices, when these are mapped into a finite set of vectors? This work addresses this general question through the specific use of a quantized and dithered random linear mapping which combines, in the following order, a sub-Gaussian random projection in $\mathbb{R}^M$ of vectors in $\mathbb{R}^N$, a random translation, or "dither", of the projected vectors, and a uniform scalar quantizer of resolution $\delta > 0$ applied componentwise. Thanks to this quantized mapping we are first able to show that, with high probability, an embedding of a bounded set $\mathcal{K} \subset \mathbb{R}^N$ in $\delta\mathbb{Z}^M$ can be achieved when distances in the quantized and in the original domains are measured with the $\ell_1$- and $\ell_2$-norm, respectively, provided the number of quantized observations $M$ is large compared to the square of the "Gaussian mean width" of $\mathcal{K}$. In this case, we show that the embedding is actually "quasi-isometric" and suffers only multiplicative and additive distortions whose magnitudes decrease as $M^{-1/5}$ for general sets, and as $M^{-1/2}$ for structured sets, as $M$ increases. Second, when one is only interested in characterizing the maximal distance separating two elements of $\mathcal{K}$ mapped to the same quantized vector, i.e., the "consistency width" of the mapping, we show that for a similar number of measurements and with high probability this width decays as $M^{-1/4}$ for general sets and as $1/M$ for structured ones as $M$ increases. Finally, as an important aspect of our work, we also establish how the non-Gaussianity of the mapping impacts the class of vectors that can be embedded or whose consistency width provably decays as $M$ increases.
    Comment: Keywords: quantization, restricted isometry property, compressed sensing, dimensionality reduction. 31 pages, 1 figure.
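
    The mapping itself is simple to state in code. Below is a small NumPy sketch, our own illustration with arbitrarily chosen constants and a random sparse test pair, of the quantized, dithered Gaussian embedding $x \mapsto \delta\lfloor(\Phi x + \xi)/\delta\rfloor$; it compares the normalized $\ell_1$ distance after quantization against the $\ell_2$ distance before it (for i.i.d. Gaussian rows the expected ratio is $\sqrt{2/\pi}$, since dithering makes the scalar quantizer unbiased in expectation).

        import numpy as np

        rng = np.random.default_rng(0)
        N, M, delta, k = 512, 2048, 0.5, 8

        A = rng.standard_normal((M, N))       # sub-Gaussian (here Gaussian) projection
        u = rng.uniform(0, delta, size=M)     # dither, drawn once for the whole mapping

        def qmap(x):
            """Quantized, dithered random embedding of x into delta * Z^M."""
            return delta * np.floor((A @ x + u) / delta)

        def sparse_vec():
            """A random k-sparse vector in R^N (a simple low-complexity model)."""
            x = np.zeros(N)
            x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
            return x

        x, y = sparse_vec(), sparse_vec()
        lhs = np.abs(qmap(x) - qmap(y)).mean()              # normalized l1 distance after mapping
        rhs = np.sqrt(2 / np.pi) * np.linalg.norm(x - y)    # expected value, l2 distance before
        print(lhs, rhs)                                     # close for large M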

    A new approach to Decimation in High Order Boltzmann Machines

    The Boltzmann Machine (BM) is a stochastic neural network with the ability to both learn and extrapolate probability distributions. However, it has never been as widely used as other neural networks such as the perceptron, due to the complexity of both the learning and recalling algorithms and to the high computational cost of the learning process: the quantities needed at the learning stage are usually estimated by Monte Carlo (MC) methods through the Simulated Annealing (SA) algorithm. This has led to a situation where the BM is rather considered as an evolution of the Hopfield Neural Network or as a parallel implementation of the Simulated Annealing algorithm. Despite this relative lack of success, the neural network community has continued to progress in the analysis of the dynamics of the model. One remarkable extension is the High Order Boltzmann Machine (HOBM), where weights can connect more than two neurons at a time. Although the learning capabilities of this model have already been discussed by other authors, a formal equivalence between the weights in a standard BM and the high order weights in a HOBM has not yet been established. We analyze this equivalence between a second order BM and a HOBM by proposing an extension of the method known as decimation. Decimation is a common tool in statistical physics that may be applied to some kinds of BMs to obtain analytical expressions for the n-unit correlations required in the learning process. In this way, decimation avoids using the time-consuming Simulated Annealing algorithm. However, as it was first conceived, it could only deal with sparsely connected neural networks. The extension that we define in this thesis allows computing the same quantities irrespective of the topology of the network. This method is based on adding enough high order weights to a standard BM to guarantee that the decimation equations can be solved.
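
    To illustrate what decimation does in this setting, the sketch below (our own toy example, not code from the thesis) sums out a single +/-1 hidden unit connected to two visible units with weights w1 and w2, and checks numerically that the result is reproduced exactly by a direct effective weight J = (1/2) ln[cosh(w1+w2)/cosh(w1-w2)] between the visible units, a standard decimation identity from statistical physics.

        import numpy as np

        w1, w2 = 0.8, -0.3                   # hidden-to-visible weights (arbitrary values)

        # Effective coupling and prefactor after decimating the hidden unit.
        J = 0.5 * np.log(np.cosh(w1 + w2) / np.cosh(w1 - w2))
        A = 2 * np.sqrt(np.cosh(w1 + w2) * np.cosh(w1 - w2))

        for s1 in (-1, 1):
            for s2 in (-1, 1):
                # Summing over the hidden spin h = +/-1 ...
                direct = sum(np.exp(h * (w1 * s1 + w2 * s2)) for h in (-1, 1))
                # ... equals a two-body Boltzmann factor on the visible spins.
                decimated = A * np.exp(J * s1 * s2)
                print(s1, s2, np.isclose(direct, decimated))   # True in all four cases
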
Next, we establish a direct equivalence between the weights of a HOBM, the probability distribution it can learn, and Hadamard matrices. The properties of these matrices can be used to easily calculate the weights of the system. Finally, we define a standard BM with a very specific topology that helps us better understand the exact equivalence between hidden units in a BM and high order weights in a HOBM.
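
    One way to make the Hadamard connection concrete, under our own reading of the abstract rather than the thesis's exact construction: over n units with +/-1 states, the products over subsets of units form an orthogonal basis whose sign table is a 2^n x 2^n Hadamard matrix, so the high order weights of a fully connected HOBM are, up to normalization, the Hadamard transform of the log-probabilities. The sketch below uses an arbitrary random target distribution and recovers it exactly from weights computed this way.

        import numpy as np
        from itertools import product

        n = 3
        states = np.array(list(product([-1, 1], repeat=n)))   # all 2^n configurations

        # Hadamard (Walsh) matrix: H[t, S] = product of state t over subset S.
        subsets = list(product([0, 1], repeat=n))             # indicator of each subset
        H = np.array([[np.prod(s[np.array(S, dtype=bool)]) for S in subsets]
                      for s in states])
        assert np.allclose(H @ H.T, 2**n * np.eye(2**n))      # Hadamard property

        rng = np.random.default_rng(1)
        p = rng.random(2**n)
        p /= p.sum()                                          # arbitrary target distribution

        w = H.T @ np.log(p) / 2**n        # high order weights; w[0] absorbs -ln Z
        p_model = np.exp(H @ w)           # distribution of the HOBM with these weights
        print(np.allclose(p_model, p))    # True: the HOBM reproduces the target exactly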