121 research outputs found

    An odyssey from classical communication to fault-tolerant quantum computing

    This thesis is primarily concerned with protecting information. Not in the sense of protecting private data, as often discussed in the media, but in the sense of robustness against data corruption. Indeed, when we use a cell phone to send a text message, several factors, such as atmospheric particles and interference with other signals, can alter the original message. If we do nothing to protect the signal, the content of the message is unlikely to arrive unchanged. This problem motivated the first research project of this thesis. Under the supervision of Professor David Poulin, I studied a generalization of polar codes, a technology at the heart of the fifth-generation (5G) telecommunication protocol. To do so, I used tensor networks, mathematical tools originally developed to study quantum materials. The advantage of this approach is that it provides an intuitive graphical representation of the problem, which greatly simplifies the development of algorithms. Following this, I studied the impact of two key parameters on the performance of convolutional polar codes. Taking the running time of the protocols into account, I identified the parameter values that best protect information at a reasonable cost. This result improves our understanding of how to boost the performance of polar codes, which has great potential for application given their importance. This idea of using graphical mathematical tools to study information-protection problems is the common thread running through the rest of the thesis. From this point on, however, the errors no longer affect classical communication systems but quantum computing systems.
Quantum systems, as I present in this thesis, are naturally far more sensitive to errors. In this regard, I completed an internship with the Microsoft Research team, mainly under the supervision of Michael Beverland, during which I designed circuits for measuring a quantum system in order to identify the faults that may affect it. Together with the rest of the team, we proved mathematically that the circuits I developed are optimal. I then proposed an architecture for implementing these circuits more realistically in the laboratory, and my numerical simulations showed promising results for this approach. This result was received with great interest by the scientific community and was published in the prestigious journal Physical Review Letters. To complement this work, I collaborated with the Microsoft team to show analytically that current quantum computer architectures relying on local connections between qubits will not suffice for building large-scale, error-protected computers. All of these results draw on methods from graph theory, in particular methods for embedding graphs in two dimensions. Using such methods to design quantum circuits and architectures is also a novel approach. I completed my thesis under the supervision of Professor Stefanos Kourtis. With him, I created a method, grounded in graph theory and techniques from theoretical computer science, that automatically designs new error-correction protocols for quantum systems. The method I devised rests on solving a constraint satisfaction problem. Problems of this type are generally very hard to solve.
However, such problems have a critical parameter. As this parameter varies, the system passes from a phase where instances are easily solvable to a phase where it is easy to show that no solution exists. The hard problems are concentrated around this transition. Through numerical experiments, I showed that the proposed method behaves similarly. This establishes that there is a regime in which designing quantum error-correction protocols is much easier than the community believed. Moreover, to the best of my knowledge, the article that resulted from this work is the first to highlight this link between the construction of error-correction protocols, constraint satisfaction problems, and phase transitions.
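    The easy-hard-easy pattern around a critical constraint density can be illustrated on random 3-SAT, the textbook constraint satisfaction problem. This is a generic sketch of the phenomenon, not the thesis's actual code-design formulation; `brute_force_sat` and `random_3sat` are illustrative names.

    ```python
    import random
    from itertools import product

    def brute_force_sat(clauses, n_vars):
        """Exhaustive satisfiability check for a CNF formula.
        A clause is a list of nonzero ints: k means variable k is true,
        -k that it is false (variables are 1-indexed)."""
        for assignment in product([False, True], repeat=n_vars):
            if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True
        return False

    def random_3sat(n_vars, n_clauses, rng):
        """Draw a uniformly random 3-SAT instance."""
        return [[v if rng.random() < 0.5 else -v
                 for v in rng.sample(range(1, n_vars + 1), 3)]
                for _ in range(n_clauses)]

    # The fraction of satisfiable instances drops sharply near a critical
    # clause/variable ratio (about 4.27 for 3-SAT), and instances drawn
    # near that transition are empirically the hardest to decide.
    rng = random.Random(0)
    n = 12
    for ratio in (2.0, 4.0, 6.0):
        sat = sum(brute_force_sat(random_3sat(n, int(ratio * n), rng), n)
                  for _ in range(20))
        print(f"ratio {ratio}: {sat}/20 satisfiable")
    ```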

    Cellular, Wide-Area, and Non-Terrestrial IoT: A Survey on 5G Advances and the Road Towards 6G

    The next wave of wireless technologies is proliferating in connecting things among themselves as well as to humans. In the era of the Internet of Things (IoT), billions of sensors, machines, vehicles, drones, and robots will be connected, making the world around us smarter. The IoT will encompass devices that must wirelessly communicate a diverse set of data gathered from the environment for myriad new applications. The ultimate goal is to extract insights from this data and develop solutions that improve quality of life and generate new revenue. Providing large-scale, long-lasting, reliable, and near real-time connectivity is the major challenge in enabling a smart connected world. This paper provides a comprehensive survey on existing and emerging communication solutions for serving IoT applications in the context of cellular, wide-area, as well as non-terrestrial networks. Specifically, wireless technology enhancements for providing IoT access in fifth-generation (5G) and beyond cellular networks, and communication networks over the unlicensed spectrum are presented. Aligned with the main key performance indicators of 5G and beyond 5G networks, we investigate solutions and standards that enable energy efficiency, reliability, low latency, and scalability (connection density) of current and future IoT networks. The solutions include grant-free access and channel coding for short-packet communications, non-orthogonal multiple access, and on-device intelligence. Further, a vision of new paradigm shifts in communication networks in the 2030s is provided, and the integration of the associated new technologies like artificial intelligence, non-terrestrial networks, and new spectra is elaborated. Finally, future research directions toward beyond 5G IoT networks are pointed out. Comment: Submitted for review to IEEE CS&

    From Polar to Reed-Muller Codes: Unified Scaling, Non-standard Channels, and a Proven Conjecture

    The year 2016, in which I am writing these words, marks the centenary of Claude Shannon, the father of information theory. In his landmark 1948 paper "A Mathematical Theory of Communication", Shannon established the largest rate at which reliable communication is possible, and he referred to it as the channel capacity. Since then, researchers have focused on the design of practical coding schemes that could approach such a limit. The road to channel capacity has been almost 70 years long and, after many ideas, occasional detours, and some rediscoveries, it has culminated in the description of low-complexity and provably capacity-achieving coding schemes, namely, polar codes and iterative codes based on sparse graphs. However, next-generation communication systems require an unprecedented performance improvement and the number of transmission settings relevant in applications is rapidly increasing. Hence, although Shannon's limit seems finally close at hand, new challenges are just around the corner. In this thesis, we trace a road that goes from polar to Reed-Muller codes and, by doing so, we investigate three main topics: unified scaling, non-standard channels, and capacity via symmetry. First, we consider unified scaling. A coding scheme is capacity-achieving when, for any rate smaller than capacity, the error probability tends to 0 as the block length becomes increasingly larger. However, the practitioner is often interested in more specific questions such as, "How much do we need to increase the block length in order to halve the gap between rate and capacity?". We focus our analysis on polar codes and develop a unified framework to rigorously analyze the scaling of the main parameters, i.e., block length, rate, error probability, and channel quality. Furthermore, in light of the recent success of a list decoding algorithm for polar codes, we provide scaling results on the performance of list decoders. Next, we deal with non-standard channels. 
When we say that a coding scheme achieves capacity, we typically consider binary memoryless symmetric channels. However, practical transmission scenarios often involve more complicated settings. For example, the downlink of a cellular system is modeled as a broadcast channel, and the communication on fiber links is inherently asymmetric. We propose provably optimal low-complexity solutions for these settings. In particular, we present a polar coding scheme that achieves the best known rate region for the broadcast channel, and we describe three paradigms to achieve the capacity of asymmetric channels. To do so, we develop general coding "primitives", such as the chaining construction that has already proved to be useful in a variety of communication problems. Finally, we show how to achieve capacity via symmetry. In the early days of coding theory, a popular paradigm consisted in exploiting the structure of algebraic codes to devise practical decoding algorithms. However, proving the optimality of such coding schemes remained an elusive goal. In particular, the conjecture that Reed-Muller codes achieve capacity dates back to the 1960s. We solve this open problem by showing that Reed-Muller codes and, in general, codes with sufficient symmetry are capacity-achieving over erasure channels under optimal MAP decoding. As the proof does not rely on the precise structure of the codes, we are able to show that symmetry alone guarantees optimal performance.
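    The scaling behavior discussed above comes from channel polarization, which is easiest to see on the binary erasure channel: one step of the polar transform turns a channel with erasure probability z into a "good" synthetic channel with erasure probability z² and a "bad" one with 2z − z². A minimal sketch of this textbook recursion (not code from the thesis itself):

    ```python
    def polarize_bec(z0, n_levels):
        """Recursively apply the BEC polar transform: each channel with
        erasure rate z splits into a good channel (z*z) and a bad
        channel (2z - z*z). Returns the 2**n_levels synthetic rates."""
        zs = [z0]
        for _ in range(n_levels):
            zs = [w for z in zs for w in (z * z, 2 * z - z * z)]
        return zs

    # After 10 levels, most of the 1024 synthetic channels from BEC(0.5)
    # are either nearly perfect or nearly useless -- the channels have
    # polarized, and the average erasure rate is exactly preserved.
    zs = polarize_bec(0.5, 10)
    frac_good = sum(z < 1e-3 for z in zs) / len(zs)
    frac_bad = sum(z > 1 - 1e-3 for z in zs) / len(zs)
    print(f"near-perfect: {frac_good:.2f}, near-useless: {frac_bad:.2f}")
    ```

    How fast the middle, unpolarized fraction shrinks with the block length is precisely the scaling question the unified framework above addresses.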

    Applications of Coding Theory to Massive Multiple Access and Big Data Problems

    The broad theme of this dissertation is the design of schemes that admit iterative algorithms with low computational complexity for some new problems arising in massive multiple access and big data. Although bipartite Tanner graphs and low-complexity iterative algorithms such as peeling and message-passing decoders are very popular in the channel coding literature, they are not as widely used in the respective areas of study, and this dissertation serves as an important step toward bridging that gap. The contributions of this dissertation can be categorized into the following three parts. In the first part of this dissertation, a timely and interesting multiple access problem for a massive number of uncoordinated devices is considered, wherein the base station is interested only in recovering the list of messages without regard to the identity of the respective sources. A coding scheme with polynomial encoding and decoding complexities is proposed for this problem, the two main features of which are (i) the design of a close-to-optimal coding scheme for the T-user Gaussian multiple access channel and (ii) a successive interference cancellation decoder. The proposed coding scheme not only improves on the performance of the previously best known coding scheme by ≈ 13 dB but is only ≈ 6 dB away from the random Gaussian coding information rate. In the second part, Construction-D lattices are constructed where the underlying linear codes are nested binary spatially-coupled low-density parity-check (SC-LDPC) codes with uniform left and right degrees. It is shown that the proposed lattices achieve the Poltyrev limit under multistage belief propagation decoding. Leveraging this result, lattice codes constructed from these lattices are applied to the three-user symmetric interference channel. 
    For channel gains within 0.39 dB of the very strong interference regime, the proposed lattice coding scheme with the iterative belief propagation decoder, for target error rates of ≈ 10^-5, is only 2.6 dB away from the Shannon limit. The third part focuses on support recovery in compressed sensing and the nonadaptive group testing (GT) problems. Prior to this work, sensing schemes based on left-regular sparse bipartite graphs and iterative recovery algorithms based on the peeling decoder were proposed for the above problems. These schemes require O(K log N) and Ω(K log K log N) measurements respectively to recover the sparse signal with high probability (w.h.p.), where N and K denote the dimension and sparsity of the signal respectively (K ≪ N). Also, the number of measurements required to recover at least a (1 − ε) fraction of defective items w.h.p. (approximate GT) is shown to be c_ε K log(N/K). In this dissertation, instead of left-regular bipartite graphs, sensing schemes based on left-and-right-regular bipartite graphs are analyzed. It is shown that this design strategy yields superior and sharper results. For the support recovery problem, the number of measurements is reduced to the optimal lower bound of Ω(K log(N/K)). Similarly, for approximate GT, the proposed scheme requires only c_ε K log(N/K) measurements. For probabilistic GT, the proposed scheme requires O(K log K log(N/K)) measurements, which is only a log K factor away from the best known lower bound of Ω(K log(N/K)). Apart from the asymptotic regime, the proposed schemes also demonstrate significant improvement in the required number of measurements for finite values of K and N.
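    The peeling-style recovery mentioned above can be illustrated on nonadaptive group testing, where each test reports the OR of the defective indicators over a pool of items. This is a generic two-stage sketch (the classic COMP rule followed by a definite-defectives peel), not the dissertation's actual left-and-right-regular construction; `gt_decode` and the example pools are illustrative.

    ```python
    def gt_decode(tests, outcomes):
        """Two-stage decoder for nonadaptive group testing.
        tests[j] is the set of items pooled in test j; outcomes[j] is the
        OR of the defective indicators over that pool."""
        # Stage 1 (COMP): every item in a negative test is non-defective.
        cleared = set()
        for pool, positive in zip(tests, outcomes):
            if not positive:
                cleared |= pool
        # Stage 2 (DD peel): a positive test whose pool contains exactly
        # one uncleared item pins that item as defective.
        defective = set()
        for pool, positive in zip(tests, outcomes):
            if positive:
                survivors = pool - cleared
                if len(survivors) == 1:
                    defective |= survivors
        return defective

    # Hypothetical example: 6 items, items 2 and 5 defective.
    tests = [{0, 1, 2}, {2, 3}, {3, 4}, {4, 5}, {0, 5}]
    truth = {2, 5}
    outcomes = [bool(pool & truth) for pool in tests]
    print(gt_decode(tests, outcomes))  # prints {2, 5}
    ```

    With well-chosen pooling graphs (the role of the left-and-right-regular design studied above), a few such peeling rounds recover all defectives from far fewer tests than items.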
