16 research outputs found

    SudoQ -- a quantum variant of the popular game

    We introduce SudoQ, a quantum version of the classical game Sudoku. When the entries of the grid are allowed to be (non-commutative) projections instead of integers, the solution set of SudoQ puzzles can be much larger than in the classical (commutative) setting. We introduce and analyze a randomized algorithm for computing solutions of SudoQ puzzles. Finally, we state two important conjectures relating the quantum and the classical solutions of SudoQ puzzles, corroborated by analytical and numerical evidence. Comment: Python code and examples available at https://github.com/inechita/Sudo
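
    As a rough illustration of the kind of constraint involved (a minimal sketch only, assuming the usual SudoQ convention that the rank-one projections in every row, column and sub-grid sum to the identity, i.e. the corresponding vectors form an orthonormal basis; the helper names below are ours, not taken from the paper's repository):

```python
import numpy as np

def projector(v):
    """Rank-one projection onto the vector v (normalised first)."""
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

def group_is_valid(projs, tol=1e-9):
    """Assumed SudoQ constraint: the projections appearing in a row,
    column or sub-grid sum to the identity."""
    d = projs[0].shape[0]
    return np.allclose(sum(projs), np.eye(d), atol=tol)

# Classical row of a 4x4 Sudoku: the entries 1..4 become standard-basis projectors.
classical_row = [projector(np.eye(4)[k]) for k in range(4)]

# A genuinely quantum row: projectors onto any other orthonormal basis of C^4,
# here the columns of a random unitary obtained from a QR decomposition.
Q, _ = np.linalg.qr(np.random.randn(4, 4) + 1j * np.random.randn(4, 4))
quantum_row = [projector(Q[:, k]) for k in range(4)]

print(group_is_valid(classical_row))  # True
print(group_is_valid(quantum_row))    # True, although no entry is a 0/1 matrix
```

    Any orthonormal basis of C^4 yields a valid row, which is the sense in which the quantum solution set can be far larger than the classical one.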

    Application of Message Passing and Sinkhorn Balancing Algorithms for Probabilistic Graphical Models

    Probabilistic Graphical Models (PGMs) allow us to map real-world scenarios to a declarative representation and use it as a basis for predictive analysis. It is a framework that allows us to express complex probability distributions in a simple way. PGMs can be applied to a variety of scenarios wherein a model is built to reflect the conditional dependencies between random variables and then used to simulate the interactions between them to draw conclusions. The framework further provides many algorithms to analyze these models and extract information. One of the applications of PGMs is in solving mathematical puzzles such as Sudoku. Sudoku is a popular number puzzle that involves filling in empty cells in an N x N grid in such a way that the numbers 1 to N appear only once in each row, column and √N x √N sub-grid. We can model this problem as a PGM and represent it in the form of a bipartite graph. The main concepts we employ to obtain an algorithm to solve Sudoku puzzles are factor graphs and message passing algorithms. In this project we attempt to modify the sum-product message passing algorithm to solve the puzzle. Additionally, we implement a solution using Sinkhorn balancing to overcome the impact of loopy propagation and compare its performance with the former.
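
    The Sinkhorn step mentioned above is easy to sketch in isolation (a hedged toy example: the data layout and function names below are invented for illustration and are not taken from the project's code):

```python
import numpy as np

def sinkhorn_balance(M, iters=100, tol=1e-9):
    """Alternately normalise the rows and columns of a non-negative matrix
    until it is (approximately) doubly stochastic."""
    M = np.asarray(M, dtype=float).copy()
    for _ in range(iters):
        M /= M.sum(axis=1, keepdims=True)   # each row sums to 1
        M /= M.sum(axis=0, keepdims=True)   # each column sums to 1
        if np.allclose(M.sum(axis=1), 1.0, atol=tol):
            break
    return M

# Toy beliefs of the 4 cells of one row of a 4x4 Sudoku over the digits 1..4.
# A zero means the digit has already been ruled out for that cell.
beliefs = np.array([
    [1.0, 0.0, 0.0, 0.0],   # this cell is already fixed to digit 1
    [0.0, 1.0, 1.0, 1.0],
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
])
print(np.round(sinkhorn_balance(beliefs), 3))
```

    Because each digit must appear in exactly one cell of the row, the cell-by-digit belief matrix is doubly stochastic at the solution, which is why alternating row and column normalisation is a natural fit for this constraint.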

    Density evolution for SUDOKU codes on the erasure channel

    Codes based on SUDOKU puzzles are discussed, and belief propagation decoding introduced for the erasure channel. Despite the non-linearity of the code constraints, it is argued that density evolution can be used to analyse code performance due to the invariance of the code under alphabet permutation. The belief propagation decoder for erasure channels operates by exchanging messages containing sets of possible values. Accordingly, density evolution tracks the probability mass functions of the set cardinalities. The equations governing the mapping of those probability mass functions are derived and calculated for variable and constraint nodes, and decoding thresholds are computed for long SUDOKU codes with random interleavers. Funded in part by the European Research Council under ERC grant agreement 259663 and by the FP7 Network of Excellence NEWCOM# under grant agreement 318306. This is the accepted manuscript; the final version is available from IEEE at http://dx.doi.org/10.1109/ISTC.2014.6955120
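
    A toy version of the set-valued message passing described here might look as follows (a sketch only: it uses the simple "remove values already forced elsewhere" rule, which can be weaker than the exact constraint-node update analysed in the paper, and all names are ours):

```python
def constraint_to_cell(other_sets, alphabet):
    """Simplified message from a 'contains every symbol exactly once'
    constraint to one of its cells: a value remains possible unless it is
    already forced (a singleton set) at one of the other cells."""
    forced = {next(iter(s)) for s in other_sets if len(s) == 1}
    return set(alphabet) - forced

def erasure_decode(cell_sets, constraints, alphabet):
    """Peeling-style decoder: keep intersecting every cell's candidate set
    with the messages of the constraints it belongs to, until nothing changes.
    Density evolution would track the distribution of the resulting set sizes."""
    changed = True
    while changed:
        changed = False
        for cells in constraints:                     # a constraint is a list of cell indices
            for i in cells:
                others = [cell_sets[j] for j in cells if j != i]
                new = cell_sets[i] & constraint_to_cell(others, alphabet)
                if new != cell_sets[i]:
                    cell_sets[i], changed = new, True
    return cell_sets

# One row of length 4 over the alphabet {1,...,4}; cells 1 and 3 were erased.
alphabet = {1, 2, 3, 4}
cells = [{2}, set(alphabet), {4}, set(alphabet)]
print(erasure_decode(cells, [[0, 1, 2, 3]], alphabet))
# -> [{2}, {1, 3}, {4}, {1, 3}]: the erased cells shrink to candidate sets of size 2
```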

    Recurrent Relational Networks

    This paper is concerned with learning to solve tasks that require a chain of interdependent steps of relational inference, like answering complex questions about the relationships between objects, or solving puzzles where the smaller elements of a solution mutually constrain each other. We introduce the recurrent relational network, a general purpose module that operates on a graph representation of objects. As a generalization of Santoro et al. [2017]'s relational network, it can augment any neural network model with the capacity to do many-step relational reasoning. We achieve state-of-the-art results on the bAbI textual question-answering dataset with the recurrent relational network, consistently solving 20/20 tasks. As bAbI is not particularly challenging from a relational reasoning point of view, we introduce Pretty-CLEVR, a new diagnostic dataset for relational reasoning. In the Pretty-CLEVR set-up, we can vary the question to control for the number of relational reasoning steps that are required to obtain the answer. Using Pretty-CLEVR, we probe the limitations of multi-layer perceptrons, relational and recurrent relational networks. Finally, we show how recurrent relational networks can learn to solve Sudoku puzzles from supervised training data, a challenging task requiring upwards of 64 steps of relational reasoning. We achieve state-of-the-art results amongst comparable methods by solving 96.6% of the hardest Sudoku puzzles. Comment: Accepted at NIPS 201
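
    The core computational pattern, stripped of its learned components, is a repeated message-passing update over a graph (a minimal numpy sketch; the random weights, ReLU layers and random adjacency below stand in for the paper's learned MLPs, LSTM update and Sudoku constraint graph):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 81, 16                              # one node per Sudoku cell, hidden size d
H = rng.normal(size=(n, d))                # hidden state of every node
X = rng.normal(size=(n, d))                # fixed input embedding of every node

# Stand-in graph: in the paper each cell is connected to all cells sharing its
# row, column or box; a random symmetric adjacency keeps this sketch short.
A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
A = A + A.T

# Random weights stand in for the learned message and node-update functions.
W_msg = rng.normal(size=(2 * d, d)) / np.sqrt(2 * d)
W_upd = rng.normal(size=(3 * d, d)) / np.sqrt(3 * d)

def rrn_step(H):
    """One round of relational reasoning: every node sends a message to each
    neighbour, sums what it receives, and updates its hidden state."""
    pair = np.concatenate([np.repeat(H[:, None, :], n, axis=1),
                           np.repeat(H[None, :, :], n, axis=0)], axis=-1)
    messages = np.maximum(pair @ W_msg, 0.0)           # message from node i to node j
    incoming = (A[..., None] * messages).sum(axis=0)   # sum over senders for each receiver
    return np.maximum(np.concatenate([H, X, incoming], axis=-1) @ W_upd, 0.0)

for _ in range(8):   # the paper runs many such steps and reads out a prediction at each one
    H = rrn_step(H)
print(H.shape)       # (81, 16)
```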

    Quantum Permutation Matrices

    Quantum permutations arise in many aspects of modern “quantum mathematics”. However, the aim of this article is to detach these objects from their context and to give a friendly introduction purely within operator theory. We define quantum permutation matrices as matrices whose entries are operators on Hilbert spaces; they obey certain assumptions generalizing classical permutation matrices. We give a number of examples and we list many open problems. We then put them back in their original context and give an overview of their use in several branches of mathematics, such as quantum groups, quantum information theory, graph theory and free probability theory.
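
    The "certain assumptions" are usually stated as the magic-unitary conditions: every entry is a projection, and the entries of each row and each column sum to the identity. A small numerical sketch under that assumption (our own illustration, not taken from the article):

```python
import numpy as np

def is_projection(P, tol=1e-9):
    """Self-adjoint idempotent: P = P* and P @ P = P."""
    return np.allclose(P, P.conj().T, atol=tol) and np.allclose(P @ P, P, atol=tol)

def is_quantum_permutation(U, tol=1e-9):
    """Assumed conditions: every entry of the operator matrix U is a
    projection, and each row and each column of U sums to the identity."""
    d = U[0][0].shape[0]
    I = np.eye(d)
    entries = all(is_projection(P, tol) for row in U for P in row)
    rows = all(np.allclose(sum(row), I, atol=tol) for row in U)
    cols = all(np.allclose(sum(col), I, atol=tol) for col in zip(*U))
    return entries and rows and cols

# Classical case: an ordinary 2x2 permutation matrix with scalar (1x1) entries.
zero, one = np.zeros((1, 1)), np.ones((1, 1))
print(is_quantum_permutation([[zero, one], [one, zero]]))                # True

# Quantum case: for ANY projection p, the matrix [[p, 1-p], [1-p, p]] works.
v = np.array([1.0, 1.0]) / np.sqrt(2)
p = np.outer(v, v)                                                       # rank-one projection on C^2
print(is_quantum_permutation([[p, np.eye(2) - p], [np.eye(2) - p, p]]))  # True
```

    The second example indicates why the quantum notion is strictly wider than the classical one: the conditions hold for any projection p, yet the entries need not be 0/1 scalars.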

    Testing of random matrices

    Let $n$ be a positive integer and $X = [x_{ij}]_{1 \leq i, j \leq n}$ be an $n \times n$ matrix of independent random variables with joint uniform distribution $\Pr\{x_{ij} = k\} = \frac{1}{n}$ for $1 \leq k \leq n$ $(1 \leq i, j \leq n)$. A realization $\mathcal{M} = [m_{ij}]$ of $X$ is called good if each of its rows and each of its columns contains a permutation of the numbers $1, 2, \ldots, n$. We present and analyse four typical algorithms which decide whether a given realization is good.
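
    For reference, the defining property is straightforward to check directly (this naive definition-based test is our own baseline sketch, not one of the four algorithms analysed in the paper):

```python
import numpy as np

def is_good(M):
    """A realization is good if every row and every column of the n x n
    matrix contains each of the numbers 1, ..., n exactly once."""
    M = np.asarray(M)
    n = M.shape[0]
    target = set(range(1, n + 1))
    return (all(set(M[i, :]) == target for i in range(n)) and
            all(set(M[:, j]) == target for j in range(n)))

n = 4
rng = np.random.default_rng(1)

# A realization of X: independent entries, uniform on {1, ..., n}.
M = rng.integers(1, n + 1, size=(n, n))
print(is_good(M))                        # almost always False for a random realization

print(is_good([[1, 2, 3, 4],
               [2, 1, 4, 3],
               [3, 4, 1, 2],
               [4, 3, 2, 1]]))           # True: a Latin square
```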