Syndrome-Coupled Rate-Compatible Error-Correcting Codes
Rate-compatible error-correcting codes (ECCs), which consist of a set of
extended codes, are of practical interest in both wireless communications and
data storage. In this work, we first study the lower bounds for rate-compatible
ECCs, thus proving the existence of good rate-compatible codes. Then, we
propose a general framework for constructing rate-compatible ECCs based on
cosets and syndromes of a set of nested linear codes. We evaluate our
construction from two points of view. From a combinatorial perspective, we show
that we can construct rate-compatible codes with increasing minimum distances.
From a probabilistic point of view, we prove that we are able to construct
capacity-achieving rate-compatible codes. Comment: Submitted to ITW 201
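The coset-and-syndrome idea can be illustrated with a toy nested pair of binary codes. The snippet below is a minimal sketch, not the paper's construction: it assumes the [7,4] Hamming code as the base code C1 and a hypothetical extra parity check defining a nested subcode C2 ⊂ C1. The incremental redundancy that lowers the rate is the extra syndrome bit identifying the codeword's coset of C2 inside C1.

```python
import numpy as np

# Illustrative sketch of syndrome-based rate-compatible extension over GF(2).
# C1 is the [7,4] Hamming code; C2 = {c in C1 : h_extra . c = 0 (mod 2)} is a
# nested subcode (h_extra is a hypothetical choice, not from the paper).

G1 = np.array([[1,1,1,0,0,0,0],   # generator of C1
               [1,0,0,1,1,0,0],
               [0,1,0,1,0,1,0],
               [1,1,0,1,0,0,1]])

H1 = np.array([[1,0,1,0,1,0,1],   # parity-check matrix of C1
               [0,1,1,0,0,1,1],
               [0,0,0,1,1,1,1]])

h_extra = np.array([1,1,1,1,1,1,1])  # extra check defining C2 (overall parity)

def encode(msg):
    """High-rate transmission: a codeword of C1."""
    return (np.asarray(msg) @ G1) % 2

def incremental_bits(c):
    """Extra syndrome bit identifying c's coset of C2 inside C1.
    Sending it later effectively extends C1 toward the stronger code C2."""
    return (h_extra @ c) % 2

c = encode([1, 0, 0, 0])
assert not ((H1 @ c) % 2).any()   # valid C1 codeword: base syndrome is zero
print(c, incremental_bits(c))     # coset bit 1 -> c lies outside C2
```

A receiver that already holds the noisy codeword can combine these incremental syndrome bits with the base parity checks, which is what makes the extended codes compatible rather than independent retransmissions.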
Solving for multi-class using orthogonal coding matrices
A common method of generalizing binary to multi-class classification is the
error correcting code (ECC). ECCs may be optimized in a number of ways, for
instance by making them orthogonal. Here we test two types of orthogonal ECCs
on seven different datasets using three types of binary classifier and compare
them with three other multi-class methods: 1 vs. 1, one-versus-the-rest and
random ECCs. The first type of orthogonal ECC, in which the codes contain no
zeros, admits a fast and simple method of solving for the probabilities.
Orthogonal ECCs are always more accurate than random ECCs, as predicted by
recent literature. Improvements in uncertainty coefficient (U.C.) range between
0.4--17.5% (0.004--0.139, absolute), while improvements in Brier score range
between 0.7--10.7%. Unfortunately, orthogonal ECCs are rarely more accurate than 1 vs.
1. Disparities are worst when the methods are paired with logistic regression,
with orthogonal ECCs never beating 1 vs. 1. When the methods are paired with
SVM, the losses are less significant, peaking at 1.5% relative (0.011 absolute)
in uncertainty coefficient and 6.5% in Brier score. Orthogonal ECCs are always
the fastest of the five multi-class methods when paired with linear
classifiers. When paired with a piecewise linear classifier, whose
classification speed does not depend on the number of training samples,
classifications using orthogonal ECCs were always more accurate than the
remaining three methods and also faster than 1 vs. 1. Losses against 1 vs. 1
here were higher, peaking at 1.9% (0.017, absolute) in U.C. and 39% in Brier
score. Gains in speed ranged between 1.1% and over 100%. Whether the speed
increase is worth the penalty in accuracy will depend on the application.
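The "fast and simple" probability solve can be sketched as follows. This is an illustrative assumption, not the paper's exact method: a hypothetical 4-class {-1,+1} coding matrix (columns 2-4 of a 4x4 Hadamard matrix, so it contains no zeros), with class probabilities recovered by least squares against the binary classifiers' outputs under a sum-to-one constraint.

```python
import numpy as np

# Hypothetical 4-class coding matrix with entries in {-1,+1} (no zeros).
# Rows are class codewords; each column defines one binary problem.
A = np.array([[ 1,  1,  1],
              [-1,  1, -1],
              [ 1, -1, -1],
              [-1, -1,  1]])
K, L = A.shape

def decode_probs(q):
    """Estimate class probabilities p from the L binary classifiers'
    probability outputs q, by solving A^T p = 2q - 1 with sum(p) = 1."""
    z = 2.0 * np.asarray(q, dtype=float) - 1.0
    M = np.vstack([A.T, np.ones(K)])          # append the sum-to-one row
    b = np.append(z, 1.0)
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    p = np.clip(p, 0.0, 1.0)                  # crude repair of tiny negatives
    return p / p.sum()

# Noisy classifier outputs pointing at class 2 (codeword [1, -1, -1]):
p = decode_probs([0.9, 0.1, 0.2])
print(p.argmax())   # 2
```

Because the rows come from a Hadamard matrix, the linear system is well conditioned and the solve costs only a small fixed-size least squares per test point, which is consistent with the speed advantage reported above for linear classifiers.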
Synchronization Strings: Codes for Insertions and Deletions Approaching the Singleton Bound
We introduce synchronization strings as a novel way of efficiently dealing
with synchronization errors, i.e., insertions and deletions. Synchronization
errors are strictly more general and much harder to deal with than commonly
considered half-errors, i.e., symbol corruptions and erasures. For every
ε > 0, synchronization strings allow one to index a sequence with an
ε^{-O(1)}-size alphabet such that one can efficiently transform k
synchronization errors into (1 + ε)k half-errors. This powerful new
technique has many applications. In this paper, we focus on designing insdel
codes, i.e., error correcting block codes (ECCs) for insertion deletion
channels.
While ECCs for both half-errors and synchronization errors have been
intensely studied, the latter has largely resisted progress. Indeed, it took
until 1999 for the first insdel codes with constant rate, constant distance,
and constant alphabet size to be constructed by Schulman and Zuckerman. Insdel
codes for asymptotically large or small noise rates were given in 2016 by
Guruswami et al. but these codes are still polynomially far from the optimal
rate-distance tradeoff. This makes the understanding of insdel codes up to this
work equivalent to what was known for regular ECCs after Forney introduced
concatenated codes in his doctoral thesis 50 years ago.
A direct application of our synchronization strings based indexing method
gives a simple black-box construction which transforms any ECC into an equally
efficient insdel code with a slightly larger alphabet size. This instantly
transfers much of the highly developed understanding for regular ECCs over
large constant alphabets into the realm of insdel codes. Most notably, we
obtain efficient insdel codes which get arbitrarily close to the optimal
rate-distance tradeoff given by the Singleton bound for the complete noise
spectrum.
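The indexing idea can be shown with a deliberately wasteful toy: tag each symbol with its position, using n distinct indices. The paper's contribution is achieving the same effect with an ε^{-O(1)}-size alphabet via synchronization strings; the sketch below only demonstrates how indexing turns insertions and deletions into half-errors at fixed positions, which an ordinary ECC can then handle.

```python
# Toy position-indexing sketch (an assumption-level illustration, NOT the
# paper's construction, which uses a small constant-size index alphabet).

def index_sequence(msg):
    """Attach its position to each symbol before transmission."""
    return [(sym, i) for i, sym in enumerate(msg)]

def deindex(received, n, erasure='?'):
    """Place symbols by their index; unmatched positions become erasures,
    and a corrupted index can at worst cause a symbol corruption -- both
    are half-errors rather than synchronization errors."""
    out = [erasure] * n
    for sym, i in received:
        if 0 <= i < n:       # spurious insertions with out-of-range
            out[i] = sym     # indices are simply discarded
    return out

msg = list("HELLO")
sent = index_sequence(msg)
# Channel: delete position 1 and insert a spurious indexed symbol.
received = [sent[0]] + sent[2:] + [('X', 9)]
print(deindex(received, len(msg)))  # ['H', '?', 'L', 'L', 'O']
```

The deletion surfaces as a single erasure at position 1 instead of shifting every later symbol, which is exactly the insdel-to-half-error conversion the black-box construction exploits.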
Quantum simulation of discrete-time Hamiltonians using directionally unbiased linear optical multiports
Recently, a generalization of the standard optical multiport was proposed [Phys. Rev. A 93, 043845 (2016)]. These directionally unbiased multiports allow photons to reverse direction and exit backwards from the input port, providing a realistic linear optical scattering vertex for quantum walks on arbitrary graph structures. Here, it is shown that arrays of these multiports allow the simulation of a range of discrete-time Hamiltonian systems. Examples are described, including a case where both spatial and internal degrees of freedom are simulated. Because input ports also double as output ports, there is substantial savings of resources compared to feed-forward networks carrying out the same functions. The simulation is implemented in a scalable manner using only linear optics, and can be generalized to higher dimensional systems in a straightforward fashion, thus offering a concrete experimentally achievable implementation of graphical models of discrete-time quantum systems. This research was supported by the National Science Foundation EFRI-ACQUIRE Grant No. ECCS-1640968, NSF Grant No. ECCS-1309209, and by the Northrop Grumman NG Next.
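The "reverse direction" behaviour can be modeled by treating a vertex as a symmetric unitary scattering matrix. The Grover coin below is an illustrative stand-in for a 3-port directionally unbiased vertex (the paper's multiports are specific interferometric devices, not necessarily this matrix): a photon entering one port has nonzero amplitude to exit every port, including back out the input.

```python
import numpy as np

# Symmetric d-port scattering vertex modeled by the Grover coin
# G = (2/d) J - I, a common directionally unbiased choice in quantum walks.
# (Illustrative assumption; not the paper's specific multiport unitary.)
d = 3
G = (2.0 / d) * np.ones((d, d)) - np.eye(d)

assert np.allclose(G @ G.T, np.eye(d))  # unitary (real orthogonal)

# Photon entering port 0: amplitude to exit any port, including port 0
# itself -- the "directionally unbiased" feature.
amp_in = np.array([1.0, 0.0, 0.0])
amp_out = G @ amp_in
probs = amp_out**2
print(probs)  # [1/9, 4/9, 4/9]: probability 1/9 of reversing direction
```

Arrays of such vertices, applied stepwise to the photon's amplitude vector, give the discrete-time evolution whose effective Hamiltonian the paper's multiport networks are designed to simulate.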
Numerical study of imperfect liquid-filled conical shells - Verifying the present design rule
Complex systems science: expert consultation report
Executive Summary

A new programme of research in Complex Systems Science must be initiated by FET

The science of complex systems (CS) is essential to establish rigorous scientific principles on which to develop the future ICT systems that are critical to the well-being, safety and prosperity of Europe and its citizens. As the “ICT incubator and pathfinder for new ideas and themes for long-term research in the area of information and communication technologies” FET must initiate a significant new programme of research in complex systems science to underpin research and development in ICT. Complex Systems Science is a “blue sky” research laboratory for R&D in ICT and their applications. In July 2009, ASSYST was given a set of probing questions concerning FET funding for ICT-related complex systems research. This document is based on the CS community’s response.

Complex systems research has made considerable progress and is delivering new science

Since FET began supporting CS research, considerable progress has been made. Building on previous understanding of concepts such as emergence from interactions, far-from-equilibrium systems, border of chaos and self-organised criticality, recent CS research is now delivering rigorous theory through methods of statistical physics, network theory, and computer simulation. CS research increasingly demands high-throughput data streams and new ICT-based methods of observing and reconstructing, i.e. modelling, the dynamics from those data in areas as diverse as embryogenesis, neuroscience, transport, epidemics, linguistics, meteorology, and robotics. CS research is also beginning to address the problem of engineering robust systems of systems of systems that can adapt to changing environments, including the perplexing problem that ICT systems are too often fragile and non-adaptive.

Recommendation: A Programme of Research in Complex Systems Science to Support ICT

Fundamental theory in Complex Systems Science is needed, but this can only be achieved through real-world applications involving large, heterogeneous, and messy data sets, including people and organisations. A long-term vision is needed. Realistic targets can be set. Fundamental research can be ensured by requiring that teams include mathematicians, computer scientists, physicists and computational social scientists.

One research priority is to develop a formalism for multilevel systems of systems of systems, applicable to all areas including biology, economics, security, transportation, robotics, health, agriculture, ecology, and climate change. Another related research priority is a scientific perspective on the integration of the new science with policy and its implementation, including ethical problems related to privacy and equality.

A further priority is the need for education in complex systems science. Conventional education continues to be domain-dominated, producing scientists who are for the most part still lacking fundamental knowledge in core areas of mathematics, computation, statistical physics, and social systems. Therefore:

1. We recommend that FET fund a new programme of work in complex systems science as essential research for progress in the development of new kinds of ICT systems.
2. We have identified the dynamics of multilevel systems as the area in complex systems science requiring a major paradigm shift, beyond which significant scientific progress cannot be made.
3. We propose a call requiring: fundamental research in complex systems science; new mathematical and computational formalisms to be developed; involving a large ‘guinea pig’ organisation; research into policy and its meta-level information dynamics; and that all research staff have interdisciplinary knowledge through an education programme.

Tangible outcomes, potential users of the new science, its impact and measures of success

Users include (i) the private and public sectors using ICT to manage complex systems and (ii) researchers in ICT, CSS, and all complex domains. The tangible output of a call will be new knowledge on the nature of complex systems in general, new knowledge of the particular complex system(s) studied, and new knowledge of the fundamental role played by ICT in the research and implementation to create real systems addressing real-world problems. The impact of the call will be seen through new high added-value opportunities in the public and private sectors, new high added-value ICT technologies, and new high added-value science to support innovation in ICT research and development. The measure of success will be through the delivery of these high added-value outcomes, and new science to better understand failures.
