Centers, cocenters and simple quantum groups
We define the notion of a (linearly reductive) center for a linearly
reductive quantum group, and show that the quotient of such a quantum group
by its center is simple whenever its fusion semiring is free in the sense of
Banica and Vergnioux. We also prove that the same is true of free products of
quantum groups under very mild non-degeneracy conditions. Several natural
families of compact quantum groups, some with non-commutative fusion semirings
and hence very "far from classical", are thus seen to be simple. Examples
include quotients of free unitary groups by their centers, recovering previous
work, as well as quotients of quantum reflection groups by their centers.Comment: 17 pages + references; TikZ diagrams; changed numbering; fixed small
error in the proof of Theorem 3.1; other small modifications after referee
comment
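For orientation (this example is not spelled out in the abstract; it is Banica's classical description of the free unitary group $U_n^+$, quoted here as the prototypical free fusion semiring in the sense of Banica and Vergnioux): the irreducibles $r_x$ are indexed by words $x$ in the free monoid on two generators $u, \bar{u}$, with

$$ r_x \otimes r_y \;=\; \sum_{x = vg,\; y = \bar{g}w} r_{vw}, $$

where $\bar{g}$ is the word $g$ reversed with $u$ and $\bar{u}$ exchanged. For instance, $r_u \otimes r_{\bar{u}} = r_{u\bar{u}} + 1$, so the trivial representation occurs exactly once in $u \otimes \bar{u}$.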
Automated searching for quantum subsystem codes
Quantum error correction allows faulty quantum systems to behave in an
effectively error-free manner. One important class of techniques for quantum
error correction is the class of quantum subsystem codes, which are relevant
both to active quantum error-correcting schemes and to the design of
self-correcting quantum memories. Previous approaches for investigating these
codes have focused on applying theoretical analysis to look for interesting
codes and to investigate their properties. In this paper we present an
alternative approach that uses computational analysis to accomplish the same
goals. Specifically, we present an algorithm that computes the optimal quantum
subsystem code that can be implemented given an arbitrary set of measurement
operators that are tensor products of Pauli operators. We then demonstrate the
utility of this algorithm by performing a systematic investigation of the
quantum subsystem codes that exist in the setting where the interactions are
limited to 2-body interactions between neighbors on lattices derived from the
limited to 2-body interactions between neighbors on lattices derived from the convex uniform tilings of the plane.
Comment: 38 pages, 15 figures, 10 tables. The algorithm described in this paper is available as both a library and a command-line program (including full source code) that can be downloaded from http://github.com/gcross/CodeQuest/downloads. The source code used to apply the algorithm to scan the lattices is available upon request. Please feel free to contact the authors with questions
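To give a flavor of the machinery behind such a computational search (a generic Python sketch under standard stabilizer-formalism conventions, not the authors' CodeQuest implementation; the gauge generators below are invented for the example): Pauli operators can be stored as binary symplectic vectors, so that commutation reduces to an inner product mod 2.

    import numpy as np

    def pauli_to_symplectic(pauli):
        """Map an n-qubit Pauli string like 'XZIY' to its binary (x|z) vector."""
        x = np.array([c in 'XY' for c in pauli], dtype=int)
        z = np.array([c in 'ZY' for c in pauli], dtype=int)
        return np.concatenate([x, z])

    def commute(p, q):
        """Two Paulis commute iff their symplectic inner product is 0 mod 2."""
        n = len(p) // 2
        return (p[:n] @ q[n:] + p[n:] @ q[:n]) % 2 == 0

    # Hypothetical 2-body gauge generators on a 3-qubit line.
    generators = ['XXI', 'IXX', 'ZZI', 'IZZ']
    vecs = [pauli_to_symplectic(g) for g in generators]
    for i, gi in enumerate(generators):
        for j, gj in enumerate(generators):
            if j > i and not commute(vecs[i], vecs[j]):
                print(gi, 'and', gj, 'anticommute')

From this commutation structure one can then search over the gauge group for stabilizer and logical degrees of freedom, which is roughly the kind of bookkeeping any subsystem-code search must perform.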
Review and assessment of latent and sensible heat flux accuracy over the global oceans
For over a decade, several research groups have been developing air-sea heat flux information over the global ocean, including latent (LHF) and sensible (SHF) heat fluxes. This paper aims to provide new insight into the quality and error characteristics of turbulent heat flux estimates at various spatial and temporal scales (from daily upwards). The study is performed within the European Space Agency (ESA) Ocean Heat Flux (OHF) project. One of the main objectives of the OHF project is to meet the recommendations and requirements expressed by various international programs such as the World Climate Research Programme (WCRP) and Climate and Ocean Variability, Predictability, and Change (CLIVAR), which recognize the need for better characterization of existing flux errors with respect to the input bulk variables (e.g. surface wind, air and sea surface temperatures, air and surface specific humidities) and to the atmospheric and oceanic conditions (e.g. wind conditions and sea state). The analysis is based on the use of daily averaged LHF and SHF and the associated bulk variables derived from major satellite-based and atmospheric reanalysis products. Inter-comparisons of the heat flux products indicate that all of them exhibit similar space and time patterns. However, they also reveal significant differences in magnitude in some specific regions, such as the western ocean boundaries during the Northern Hemisphere winter season and the high southern latitudes. The differences tend to be closely related to large differences in surface wind speed and/or specific air humidity (for LHF) and to air and sea temperature differences (for SHF). Further quality investigations are performed through comprehensive comparisons with daily-averaged LHF and SHF estimated from moorings. The resulting statistics are used to assess the error of each OHF product, with consideration also given to error correlation between products and observations (e.g., through their assimilation). This reveals generally high noise variance in all products and a weak signal in common with in situ observations, with some products only slightly better than others. The OHF LHF and SHF products, and their associated error characteristics, are used to compute daily OHF multiproduct-ensemble (OHF/MPE) estimates of LHF and SHF over the ice-free global ocean on a 0.25° × 0.25° grid. The accuracy of this multiproduct ensemble, determined from comparisons with mooring data, is greater than that of any individual product. It is used as a reference for the anomaly characterization of each individual OHF product.
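The abstract does not specify how the OHF/MPE estimate is weighted; as a rough sketch, assuming an inverse-error-variance weighted mean of K gridded products (the product count, error variances, and grid values below are illustrative only):

    import numpy as np

    # Hypothetical stack of K daily LHF products on a shared 0.25° grid,
    # with per-product error variances estimated against mooring data.
    K, nlat, nlon = 4, 720, 1440
    rng = np.random.default_rng(0)
    products = rng.normal(100.0, 15.0, size=(K, nlat, nlon))  # W m^-2
    err_var = np.array([25.0, 16.0, 36.0, 20.0])              # (W m^-2)^2

    # Inverse-error-variance weights, normalized to sum to one.
    w = 1.0 / err_var
    w /= w.sum()

    # Multiproduct-ensemble estimate: weighted mean across products.
    mpe = np.tensordot(w, products, axes=1)   # shape (nlat, nlon)

Down-weighting noisier products in this way is one standard route to an ensemble whose accuracy exceeds that of any individual member, consistent with the comparison against mooring data reported above.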
A Unified Coded Deep Neural Network Training Strategy Based on Generalized PolyDot Codes for Matrix Multiplication
This paper has two contributions. First, we propose a novel coded matrix
multiplication technique called Generalized PolyDot codes that advances on
existing methods for coded matrix multiplication under storage and
communication constraints. This technique uses "garbage alignment," i.e.,
aligning computations in coded computing that are not a part of the desired
output. Generalized PolyDot codes bridge between Polynomial codes and MatDot
codes, trading off between recovery threshold and communication costs. Second,
we demonstrate that Generalized PolyDot can be used for training large Deep
Neural Networks (DNNs) on unreliable nodes prone to soft-errors. This requires
us to address three additional challenges: (i) prohibitively large overhead of
coding the weight matrices in each layer of the DNN at each iteration; (ii)
nonlinear operations during training, which are incompatible with linear
coding; and (iii) not assuming the presence of an error-free master node, requiring
us to architect a fully decentralized implementation without any "single point
of failure." We allow all primary DNN training steps, namely, matrix
multiplication, nonlinear activation, Hadamard product, and update steps as
well as the encoding/decoding to be error-prone. We consider the case of
mini-batch size B = 1, as well as B > 1, leveraging coded matrix-vector
products and matrix-matrix products, respectively. The problem of DNN training
under soft-errors also motivates an interesting, probabilistic error model
under which a real-number (P, Q) MDS code is shown to correct P − Q errors
with probability 1, as compared to ⌊(P − Q)/2⌋ errors under the
more conventional, adversarial error model. We also demonstrate that our
proposed strategy can provide unbounded gains in error tolerance over a
competing replication strategy and a preliminary MDS-code-based strategy for
both these error models.
Comment: Presented in part at the IEEE International Symposium on Information Theory 2018 (submission date: Jan 12, 2018); currently under review at the IEEE Transactions on Information Theory
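As a hedged illustration of one endpoint of the tradeoff that Generalized PolyDot codes span, here is a minimal Python sketch of Polynomial codes for distributed matrix multiplication (block counts, worker indices, and matrix sizes are invented for the example; this is not the paper's construction itself):

    import numpy as np

    m, n = 2, 2                  # row blocks of A, column blocks of B
    N = 6                        # workers, some of which may fail
    k = m * n                    # recovery threshold: any k results suffice

    rng = np.random.default_rng(1)
    A = rng.normal(size=(4, 3))
    B = rng.normal(size=(3, 4))
    A_blocks = np.split(A, m, axis=0)
    B_blocks = np.split(B, n, axis=1)

    # Worker t evaluates pA(x_t) @ pB(x_t), where pA(x) = sum_i A_i x^i
    # and pB(x) = sum_j B_j x^(j*m).
    xs = np.arange(1, N + 1, dtype=float)
    def work(x):
        pA = sum(Ai * x**i for i, Ai in enumerate(A_blocks))
        pB = sum(Bj * x**(j * m) for j, Bj in enumerate(B_blocks))
        return pA @ pB

    results = {t: work(xs[t]) for t in [0, 2, 3, 5]}  # any k workers answer

    # C(x) = sum_{i,j} A_i B_j x^(i + j*m) has degree k - 1, so its block
    # coefficients follow from any k evaluations via a Vandermonde solve.
    pts = np.array([xs[t] for t in results])
    V = np.vander(pts, k, increasing=True)
    stack = np.stack(list(results.values()))
    coeffs = np.linalg.solve(V, stack.reshape(k, -1)).reshape(stack.shape)

    C = np.block([[coeffs[i + j * m] for j in range(n)] for i in range(m)])
    assert np.allclose(C, A @ B)

MatDot codes sit at the other end of the tradeoff, splitting along the inner dimension to cut the recovery threshold at the cost of more communication per worker; Generalized PolyDot codes interpolate between these two regimes.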
Internet Filters: A Public Policy Report (Second edition; fully revised and updated)
No sooner was the Internet upon us than anxiety arose over the ease of accessing pornography and other controversial content. In response, entrepreneurs soon developed filtering products. By the end of the decade, a new industry had emerged to create and market Internet filters.... Yet filters were highly imprecise from the beginning. The sheer size of the Internet meant that identifying potentially offensive content had to be done mechanically, by matching "key" words and phrases; hence the blocking of Web sites for "Middlesex County," or words such as "magna cum laude". Internet filters are crude and error-prone because they categorize expression without regard to its context, meaning, and value. Yet these sweeping censorship tools are now widely used in companies, homes, schools, and libraries. Internet filters remain a pressing public policy issue for all those concerned about free expression, education, culture, and democracy. This fully revised and updated report surveys tests and studies of Internet filtering products from the mid-1990s through 2006. It provides an essential resource for the ongoing debate.
Three-transmit-antenna space-time codes based on SU(3)
Fully diverse constellations, i.e., sets of unitary matrices whose pairwise differences are nonsingular, are useful in multiantenna communications, especially in multiantenna differential modulation, since they have good pairwise error properties. Recently, group-theoretic ideas, especially fixed-point-free (fpf) groups, have been used to design fully diverse constellations of unitary matrices. Here, we give systematic design methods for space-time codes that are appropriate for three-transmit-antenna differential modulation. The structures of the codes are motivated by the special unitary Lie group SU(3). One of the codes, called the AB code, has a fast maximum-likelihood (ML) decoding algorithm using complex sphere decoding. Diversity products of the codes can be easily calculated, and simulated performance shows that they are better than group-based codes, especially at high rates, and as good as elaborately designed nongroup codes.
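For reference, the figure of merit invoked here is the standard diversity product of a constellation $\{V_1, \dots, V_L\}$ of $M \times M$ unitary matrices ($M = 3$ for three transmit antennas):

$$ \zeta \;=\; \frac{1}{2} \min_{1 \le l < l' \le L} \left| \det\!\left( V_l - V_{l'} \right) \right|^{1/M}, $$

so the constellation is fully diverse exactly when $\zeta > 0$, and larger $\zeta$ improves the pairwise error behavior at high SNR.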