
    A SAT+CAS Approach to Finding Good Matrices: New Examples and Counterexamples

    We enumerate all circulant good matrices with odd orders divisible by 3 up to order 70. As a consequence we find a previously overlooked set of good matrices of order 27 and a new set of good matrices of order 57. We also find that circulant good matrices do not exist in orders 51, 63, and 69, thereby providing three new counterexamples to the conjecture that such matrices exist in all odd orders. Additionally, we prove a new relationship between the entries of good matrices and exploit this relationship in our enumeration algorithm. Our method applies the SAT+CAS paradigm of combining computer algebra functionality with modern SAT solvers to efficiently search large spaces specified by both algebraic and logical constraints.
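    To illustrate the kind of algebraic constraint being searched, good matrices are commonly defined as four ±1 circulant matrices A, B, C, D of odd order n satisfying AA^T + BB^T + CC^T + DD^T = 4nI, with A skew-type and B, C, D symmetric. A minimal NumPy sketch checking this condition for an order-3 example (the example rows below are our own illustration, not taken from the paper):

```python
import numpy as np

def circulant(row):
    """Circulant matrix whose i-th row is `row` cyclically shifted right i times."""
    n = len(row)
    return np.array([[row[(j - i) % n] for j in range(n)] for i in range(n)])

def is_good(rows):
    """Check the defining condition A A^T + B B^T + C C^T + D D^T = 4n I."""
    n = len(rows[0])
    total = sum(circulant(r) @ circulant(r).T for r in rows)
    return np.array_equal(total, 4 * n * np.eye(n, dtype=int))

# Order-3 example: A = circ(1, 1, -1) is skew-type; B, C, D are symmetric.
rows = [np.array(r) for r in ([1, 1, -1], [1, 1, 1], [1, -1, -1], [1, -1, -1])]
print(is_good(rows))  # → True
```

A SAT+CAS search of the kind the paper describes would explore the space of candidate first rows while pruning with algebraic conditions like the one tested above.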

    Group weighing matrices

    Ph.D. (Doctor of Philosophy) thesis

    Making Maps Of The Cosmic Microwave Background: The MAXIMA Example

    This work describes Cosmic Microwave Background (CMB) data analysis algorithms and their implementations, developed to produce a pixelized map of the sky and a corresponding pixel-pixel noise correlation matrix from time-ordered data for a CMB mapping experiment. We discuss in turn algorithms for estimating noise properties from the time-ordered data, techniques for manipulating the time-ordered data, and a number of variants of the maximum likelihood map-making procedure. We pay particular attention to issues pertinent to real CMB data and present ways of incorporating them within the framework of maximum likelihood map-making. Making a map of the sky is shown to be not only an intermediate step rendering an image of the sky, but also an important diagnostic stage, in which tests for and/or removal of systematic effects can be performed efficiently. The case under study is the MAXIMA data set; however, the methods discussed are expected to be applicable to the analysis of other current and forthcoming CMB experiments.

    Biangular vectors

    viii, 133 leaves ; 29 cm
    This thesis introduces unit weighing matrices, a generalization of Hadamard matrices. When dealing with unit weighing matrices, much of the structure held by Hadamard matrices is lost, but this loss of rigidity allows these matrices to be used in the construction of certain combinatorial objects. We are able to fully classify these matrices for many small values by defining equivalence classes analogous to those found for Hadamard matrices. We then introduce an extension of mutually unbiased bases, called mutually unbiased weighing matrices, by allowing different subsets of vectors to be orthogonal. The bounds on the size of these sets of matrices, both lower and upper, are examined. In many situations, we are able to show that these bounds are sharp. Finally, we show how these sets of matrices can be used to generate combinatorial objects such as strongly regular graphs and association schemes.

    Amicable T-matrices and applications

    iii, 49 leaves ; 29 cm
    Our main aim in this thesis is to produce new T-matrices from the set of existing T-matrices. In Theorem 4.3 a multiplication method is introduced to generate new T-matrices of order st, provided that there are some specially structured T-matrices of orders s and t. A class of properly amicable and double disjoint T-matrices is introduced. Properly amicable T-matrices are constructed for orders including 2, 3, 5, 6, 7, 9, 10, 11, 13, 14, 18, and 22. To keep the new matrices disjoint, an extra condition is imposed on one set of T-matrices, which we call double disjoint T-matrices. It is shown that some T-matrices are both double disjoint and properly amicable. Using these matrices, an infinite family of new T-matrices is constructed. We then turn our attention to the application of T-matrices to the construction of orthogonal designs and complex Hadamard matrices. Using T-matrices, some orthogonal designs constructible from 16 circulant matrices are constructed. It is known that having T-matrices of order t and orthogonal designs constructible from 16 circulant matrices leads to an infinite family of orthogonal designs. Using amicable T-matrices, some complex Hadamard matrices are shown to exist.
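    The amicability condition on a pair of matrices is commonly stated as AB^T = BA^T, which is equivalent to AB^T being symmetric and is straightforward to test numerically. A minimal sketch (the example matrices are illustrative, not the T-matrices constructed in the thesis):

```python
import numpy as np

def are_amicable(A, B):
    """A and B are amicable when A B^T = B A^T, i.e. A B^T is symmetric."""
    P = A @ B.T
    return np.array_equal(P, P.T)

# Any matrix is amicable with itself, since A A^T is always symmetric.
A = np.array([[1, 1], [1, -1]])
print(are_amicable(A, A))  # → True

# A generic pair need not be amicable.
B = np.array([[0, 1], [0, 0]])
print(are_amicable(A, B))  # → False
```

A check like this is useful for verifying candidate pairs produced by a construction before applying them in a larger design.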

    Amicable matrices and orthogonal designs

    This thesis is mainly concerned with orthogonal designs of Baumert-Hall array type, OD(4n; n, n, n, n), where n = 2k and k is an odd integer. For every odd prime power p^r, we construct an infinite class of amicable T-matrices of order n = p^r + 1 in association with negacirculant weighing matrices W(n, n-1). In particular, for p^r ≡ 1 (mod 4) we construct amicable T-matrices of order n ≡ 2 (mod 4), and application of these matrices allows us to generate an infinite class of orthogonal designs of types OD(4n; n, n, n, n) and OD(4n; n, n, n-2, n-2), where n = 2k and k is an odd integer. For a special class of T-matrices of order n, where each T_i is a weighing matrix of weight w_i, 1 ≤ i ≤ 4, and Williamson-type matrices of order m, we establish a theorem which produces four circulant matrices in terms of four variables. These matrices are additive and can be used to generate a new class of orthogonal designs of type OD(4mn; w_1 s, w_2 s, w_3 s, w_4 s), where s = 4m. In addition, we present some methods to find amicable matrices of odd order in terms of variables, which have an interesting application in generating some new orthogonal designs as well as generalized orthogonal designs.
    University of Lethbridge, NSERC

    Efficient machine learning: models and accelerations

    One of the key enablers of the recent unprecedented success of machine learning is the adoption of very large models. Modern machine learning models typically consist of multiple cascaded layers, such as deep neural networks, with at least millions to hundreds of millions of parameters (i.e., weights) for the entire model. Larger-scale models tend to enable the extraction of more complex high-level features and therefore lead to a significant improvement in overall accuracy. On the other hand, the layered deep structure and large model sizes also demand increased computational capability and memory. In order to achieve higher scalability, performance, and energy efficiency for deep learning systems, two orthogonal research and development trends have attracted enormous interest. The first trend is acceleration; the second is model compression. The underlying goal of both is a high-quality model that provides accurate predictions. In this thesis, we address these two problems and utilize different computing paradigms to solve real-life deep learning problems. To explore these two domains, this thesis first presents the cogent confabulation network for the sentence completion problem. We use the Chinese language as a case study to describe our exploration of cogent-confabulation-based text recognition models. The exploration and optimization of the cogent-confabulation-based models have been conducted through various comparisons. The optimized network offered better accuracy for sentence completion. To accelerate the sentence completion problem on a multi-processing system, we propose a parallel framework for the confabulation recall algorithm. The parallel implementation reduces runtime, improves the recall accuracy by breaking the fixed evaluation order and introducing more generalization, and maintains balanced progress in status updates among all neurons.
    A lexicon scheduling algorithm is presented to further improve the model performance. As deep neural networks have been proven effective for many real-life applications and are deployed on low-power devices, we then investigate the acceleration of neural network inference using a hardware-friendly computing paradigm, stochastic computing. It is an approximate computing paradigm which requires a small hardware footprint and achieves high energy efficiency. Applying stochastic computing to deep convolutional neural networks, we design the functional hardware blocks and optimize them jointly to minimize the accuracy loss due to the approximation. The synthesis results show that the proposed design achieves remarkably low hardware cost and power/energy consumption. Modern neural networks usually contain a huge number of parameters, which cannot fit into embedded devices. Compression of deep learning models, together with acceleration, therefore attracts our attention. We introduce structured-matrix-based neural networks to address this problem. The circulant matrix is one such structured matrix: the whole matrix can be represented by a single vector, so the matrix is compressed. We further investigate a more flexible structure based on the circulant matrix, called the block-circulant matrix. It partitions a matrix into several smaller blocks and makes each submatrix circulant, so the compression ratio is controllable. With the help of Fourier-transform-based equivalent computation, inference of the deep neural network can be accelerated energy-efficiently on FPGAs. We also optimize the training algorithm for block-circulant-matrix-based neural networks to obtain high accuracy after compression.
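    The FFT-based computation mentioned above rests on the identity circ(c) · x = IFFT(FFT(c) ⊙ FFT(x)), applied block by block. A minimal NumPy sketch of a block-circulant matrix-vector product, checked against the dense equivalent (the layer and block sizes are illustrative, not taken from the thesis):

```python
import numpy as np

def circulant_from_col(c):
    """Dense circulant matrix with first column c (reference implementation)."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

def block_circulant_matvec(blocks, x):
    """y = W x, where W is a p-by-q grid of b-by-b circulant blocks.

    blocks[i, j] holds the first column of block (i, j); each block is
    applied in O(b log b) via the FFT instead of O(b^2) densely.
    """
    p, q, b = blocks.shape
    Xf = np.fft.fft(x.reshape(q, b), axis=1)    # FFT of each input segment
    Bf = np.fft.fft(blocks, axis=2)             # FFT of each defining vector
    Yf = np.einsum('ijk,jk->ik', Bf, Xf)        # pointwise product, summed over j
    return np.fft.ifft(Yf, axis=1).real.reshape(p * b)

rng = np.random.default_rng(0)
p, q, b = 2, 3, 4                               # illustrative sizes
blocks = rng.standard_normal((p, q, b))
x = rng.standard_normal(q * b)

# Verify against the dense matrix assembled from the same blocks.
W = np.block([[circulant_from_col(blocks[i, j]) for j in range(q)] for i in range(p)])
print(np.allclose(block_circulant_matvec(blocks, x), W @ x))  # → True
```

The storage drops from p·q·b² weights to p·q·b defining vectors, which is the controllable compression ratio the abstract refers to.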

    The Explicit Identities for Spectral Norms of Circulant-Type Matrices Involving Binomial Coefficients and Harmonic Numbers

    Explicit formulae for the spectral norms of circulant-type matrices are investigated; the matrices considered are the circulant matrix, the skew-circulant matrix, and the g-circulant matrix. The entries are products of binomial coefficients with harmonic numbers. Explicit identities for these spectral norms are obtained, and some numerical tests are presented to verify the results.
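    The key fact underlying such identities is that a circulant matrix is normal and its eigenvalues are the DFT of its defining vector, so the spectral norm is the largest eigenvalue modulus. A sketch using entries built from binomial coefficients and harmonic numbers (the particular entry formula here is illustrative, not the one analyzed in the paper):

```python
import numpy as np
from math import comb

def harmonic(k):
    """k-th harmonic number H_k = 1 + 1/2 + ... + 1/k."""
    return sum(1.0 / i for i in range(1, k + 1))

def circulant_from_col(c):
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

n = 8
# Illustrative entries: products of binomial coefficients with harmonic numbers.
c = np.array([comb(n - 1, k) * harmonic(k + 1) for k in range(n)])

# Eigenvalues of a circulant matrix are the DFT of its defining vector,
# so the spectral norm is the maximum modulus of those DFT values.
norm_fft = np.max(np.abs(np.fft.fft(c)))
norm_dense = np.linalg.norm(circulant_from_col(c), 2)
print(np.isclose(norm_fft, norm_dense))  # → True
```

Closed-form identities of the kind the paper derives come from evaluating that maximum DFT modulus symbolically for specific entry sequences.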