Multilevel LDPC Lattices with Efficient Encoding and Decoding and a Generalization of Construction D'
Lattice codes are elegant and powerful structures that not only can achieve
the capacity of the AWGN channel but are also a key ingredient to many
multiterminal schemes that exploit linearity properties. However, constructing
lattice codes that can realize these benefits with low complexity is still a
challenging problem. In this paper, efficient encoding and decoding algorithms
are proposed for multilevel binary LDPC lattices constructed via Construction
D', whose complexity is linear in the total number of coded bits. Moreover, a
generalization of Construction D' is proposed that relaxes some of the nesting
constraints on the parity-check matrices of the component codes, leading to a
simpler and improved design. Based on this construction, low-complexity
multilevel LDPC lattices are designed whose performance under multistage
decoding is comparable to that of polar lattices and close to that of
low-density lattice codes (LDLC) on the power-unconstrained AWGN channel.
Comment: 15 pages, 4 figures. To appear in IEEE Transactions on Information Theory.
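The multilevel encoding the paper makes efficient can be illustrated with toy components. A minimal sketch, assuming two tiny nested binary codes (a length-4 repetition code inside an even-weight code) as stand-ins for the LDPC component codes; Construction D' is specified via nested parity-check matrices, but the generator-side view below shows the same two-level map from per-level codewords plus an integer vector to a lattice point:

```python
import numpy as np

# Toy nested binary codes of length 4 (stand-ins for LDPC component codes):
# level 0: repetition code {0000, 1111}; level 1: even-weight (parity) code.
# Nesting C0 <= C1 holds because G0's row is among G1's rows.
G0 = np.array([[1, 1, 1, 1]])                      # 1 x 4 generator
G1 = np.array([[1, 1, 1, 1],
               [0, 1, 1, 0],
               [0, 0, 1, 1]])                      # 3 x 4 generator

def encode_lattice_point(m0, m1, z):
    """Two-level lattice encoding: x = c0 + 2*c1 + 4*z, where c_l is the
    level-l codeword for message bits m_l and z is an unconstrained
    integer vector."""
    c0 = (np.asarray(m0) @ G0) % 2
    c1 = (np.asarray(m1) @ G1) % 2
    return c0 + 2 * c1 + 4 * np.asarray(z)

x = encode_lattice_point([1], [1, 0, 1], [0, -1, 2, 0])  # -> array([ 3, -1,  9,  1])
```

Decoding proceeds level by level (multistage decoding): the level-0 code is decoded first, its contribution is subtracted, and so on, which is what keeps the overall complexity linear in the number of coded bits.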
A Practical Approach to Lossy Joint Source-Channel Coding
This work is devoted to practical joint source channel coding. Although the
proposed approach has more general scope, for the sake of clarity we focus on a
specific application example, namely, the transmission of digital images over
noisy binary-input output-symmetric channels. The basic building blocks of most
state-of-the-art source coders are: 1) a linear transformation; 2) scalar
quantization of the transform coefficients; 3) probability modeling of the
sequence of quantization indices; 4) an entropy coding stage. We identify the
weakness of the conventional separated source-channel coding approach in the
catastrophic behavior of the entropy coding stage. Hence, we replace this stage
with linear coding, which directly maps the sequence of redundant quantizer
output symbols into a channel codeword. We show that this approach does not
entail any loss of optimality in the asymptotic regime of large block length.
However, in the practical regime of finite block length and low decoding
complexity, our approach yields very significant improvements. Furthermore, our
scheme allows us to retain the transform, quantization, and probability modeling
of current state-of-the-art source coders, which are carefully matched to the
features of specific classes of sources. In our working example, we make use of
the ``bit-plane'' and ``context'' models defined by the JPEG2000 standard and we
re-interpret the underlying probability model as a sequence of conditionally
Markov sources. The Markov structure allows us to derive a simple successive
coding and decoding scheme, where the latter is based on iterative Belief
Propagation. We provide a construction example of the proposed scheme based on
punctured Turbo Codes and we demonstrate the gain over a conventional separated
scheme by running extensive numerical experiments on test images.
Comment: 51 pages. Submitted to IEEE Transactions on Information Theory.
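The stage swap at the heart of the scheme, entropy coder out, linear map in, reduces to a matrix product over GF(2). A minimal sketch with an assumed random dense binary matrix (the paper builds this map from punctured Turbo codes, not a random matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
n_src, n_tx = 12, 8   # redundant quantizer bits -> channel input bits (toy sizes)

# Fixed binary linear map standing in for the structured (Turbo-based) encoder.
A = rng.integers(0, 2, size=(n_tx, n_src))

def linear_encode(u):
    """Map the redundant quantizer output bits u directly to channel bits
    via a mod-2 linear transform, with no entropy coding stage."""
    return (A @ np.asarray(u)) % 2

u = rng.integers(0, 2, size=n_src)   # quantizer output bits (toy)
x = linear_encode(u)
```

Because the map is linear, a flipped channel bit perturbs the decoder's beliefs only locally, whereas one flipped bit in an arithmetic-coded stream can desynchronize everything after it; this is the catastrophic behavior the paper removes.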
Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide more or less the different explanatory
factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation, and manifold learning.
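Among the model families the review covers, the auto-encoder is the most compact to demonstrate. A minimal sketch, assuming synthetic data with two underlying factors of variation and a purely linear encoder/decoder pair trained by gradient descent (all sizes and rates are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 2 explanatory factors embedded in 10 dimensions plus noise.
Z = rng.normal(size=(500, 2))
M = rng.normal(size=(2, 10))
X = Z @ M + 0.01 * rng.normal(size=(500, 10))

# Linear auto-encoder: learn a 2-D representation that reconstructs X.
W_enc = 0.1 * rng.normal(size=(10, 2))
W_dec = 0.1 * rng.normal(size=(2, 10))
err0 = np.mean((X @ W_enc @ W_dec - X) ** 2)   # reconstruction error at init

lr = 0.02
for _ in range(500):
    H = X @ W_enc                  # codes (the learned representation)
    R = H @ W_dec - X              # reconstruction residual
    g_dec = H.T @ R / len(X)       # gradients of the mean squared error
    g_enc = X.T @ (R @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

err = np.mean((X @ W_enc @ W_dec - X) ** 2)    # should drop well below err0
```

The learned 2-D codes recover (a rotation of) the generating factors, the simplest instance of a representation exposing the explanatory factors behind the data.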
Analytic and Machine Learning Based Design of Monolithic Transistor-Antenna for Plasmonic Millimeter-Wave Detectors
Department of Electrical Engineering
This thesis reports an advanced analysis of a monolithic transistor-antenna, designing a ring-type asymmetric FET itself as a receiving antenna element that receives millimeter-waves in a lossless manner with plasmonic amplification for millimeter-wave (mmW) detectors. The proposed transistor-antenna device combines the plasmonic and electromagnetic (EM) aspects in a single structure. As a result, it can absorb the incoming mmW and transfer power directly to the ring-type asymmetric channel without any feeding line or separate antenna element. Both the charge asymmetry in the device channel and the antenna coupling contribute to the enhanced photoresponse; of these two factors, the improved antenna coupling is the more dominant in the performance enhancement of our proposed design. Moreover, by characterizing its impedance exactly, the device achieves a uniformly enhanced responsivity for every pixel, a prerequisite for real-time mmW imaging. The operating principle of the proposed device is discussed, focusing on how signal transmission through the ring-type structure is possible without any feeding line between the antenna and the detector. To determine the antenna geometry for a desired resonant frequency, we present an efficient design procedure based on periodic bandgap analysis combined with parametric electromagnetic simulations. From a fabricated ring-type FET-based monolithic antenna device, we demonstrate a highly enhanced optical responsivity and a reduced optical noise-equivalent power, comparable to those of reported state-of-the-art CMOS-based antenna-integrated direct detectors.
Another part of the thesis focuses on developing machine learning models to enable fast, accurate design and verification of electromagnetic structures. We propose a novel Bayesian learning algorithm, named Bayesian clique learning, for searching for optimal electromagnetic design parameters by exploiting the structural properties of EM simulation data sets. Along with this, we also give an inverse-problem approach for designing electromagnetic structures, which suggests going in the opposite direction: determining the design parameters from the characteristics of the desired output.
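Bayesian clique learning itself is not specified in this abstract, so no attempt is made to reproduce it; the sketch below shows only the generic pattern it belongs to: surrogate-assisted search over an expensive simulator. The one-dimensional "simulation", the RBF-kernel Gaussian-process surrogate, and the upper-confidence-bound acquisition rule are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(x):
    """Stand-in for an expensive EM simulation: peak response near x = 0.6."""
    return np.exp(-40.0 * (x - 0.6) ** 2)

def rbf(a, b, ls=0.1):
    """RBF kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

# Surrogate-assisted search: fit a GP to past simulations, then probe where
# the upper confidence bound (mean + 2*std) is highest.
X = list(rng.uniform(0, 1, 3))            # initial random designs
y = [simulate(x) for x in X]
grid = np.linspace(0, 1, 200)
for _ in range(12):
    Xa = np.array(X)
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))       # jittered Gram matrix
    Ks = rbf(grid, Xa)
    mu = Ks @ np.linalg.solve(K, np.array(y))       # GP posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)  # posterior var
    x_next = grid[np.argmax(mu + 2.0 * np.sqrt(np.maximum(var, 0.0)))]
    X.append(float(x_next))
    y.append(simulate(x_next))

best = X[int(np.argmax(y))]               # best design found so far
```

Each loop iteration costs one simulation; the surrogate decides where that single expensive call is spent, which is the economy the thesis pursues.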
Repeat-Accumulate Signal Codes
State-constrained signal codes directly encode modulation signals using
signal processing filters, the coefficients of which are constrained over the
rings of formal power series. Although the performance of signal codes is
determined by these signal filters, optimal filters must be found by
brute-force search in terms of symbol error rate, because the asymptotic
behavior with different filters has not been investigated. Moreover, the
computational complexity of the conventional BCJR algorithm used in the
decoder increases exponentially as the number of output constellations
increases. We hence propose a new class of
state-constrained signal codes called repeat-accumulate signal codes (RASCs).
To analyze the asymptotic behavior of these codes, we employ Monte Carlo
density evolution (MC-DE). As a result, the optimum filters can be efficiently
found for given parameters of the encoder. We also introduce a low-complexity
decoding algorithm for RASCs called the extended min-sum (EMS) decoder. The
MC-DE analysis shows that the difference between noise thresholds of RASC and
the Shannon limit is within 0.8 dB. Simulation results moreover show that the
EMS decoder can reduce the computational complexity to less than 25% of that
of the conventional decoder without degrading the performance by more than 1 dB.
Comment: Accepted for publication in IEEE Transactions on Communications.
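The serial structure RASCs generalize is easiest to see in its classical binary form. A minimal sketch of the standard binary repeat-accumulate encoder (repetition, interleaver, accumulator); RASCs replace these mod-2 blocks with signal-processing filters acting on modulation signals, but the concatenation is the same. The parameters and the reversal interleaver are toy choices:

```python
import numpy as np

def ra_encode(bits, q, perm):
    """Classic binary repeat-accumulate encoding: repeat each bit q times,
    interleave, then accumulate (running mod-2 sum, the 1/(1+D) filter)."""
    repeated = np.repeat(bits, q)      # repetition stage
    interleaved = repeated[perm]       # interleaver
    return np.cumsum(interleaved) % 2  # accumulator

k, q = 4, 3
perm = np.arange(k * q)[::-1]          # toy interleaver: simple reversal
cw = ra_encode(np.array([1, 0, 1, 1]), q, perm)
# cw -> [1 0 1 0 1 0 0 0 0 1 0 1]
```

The accumulator gives the code memory (a state), which is what the BCJR or EMS decoder has to track; in the signal-code setting that state lives over a much larger output constellation, hence the complexity concern the abstract addresses.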
Statistical physics of inference: Thresholds and algorithms
Many questions of fundamental interest in today's science can be formulated as
inference problems: Some partial, or noisy, observations are performed over a
set of variables and the goal is to recover, or infer, the values of the
variables based on the indirect information contained in the measurements. For
such problems, the central scientific questions are: Under what conditions is
the information contained in the measurements sufficient for a satisfactory
inference to be possible? What are the most efficient algorithms for this task?
A growing body of work has shown that often we can understand and locate these
fundamental barriers by thinking of them as phase transitions in the sense of
statistical physics. Moreover, it has turned out that we can use the gained
physical insight to develop new, promising algorithms. The connection between
inference and statistical physics is currently witnessing an impressive
renaissance, and we review here the current state of the art, with a pedagogical
focus on the Ising model, which, formulated as an inference problem, we call the
planted spin glass.
planted spin glass. In terms of applications we review two classes of problems:
(i) inference of clusters on graphs and networks, with community detection as a
special case and (ii) estimating a signal from its noisy linear measurements,
with compressed sensing as a case of sparse estimation. Our goal is to provide
a pedagogical review for researchers in physics and other fields interested in
this fascinating topic.
Comment: 86 pages, 16 figures. Review article based on the HDR thesis of the
first author and lecture notes of the second.
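The planted spin glass the review centers on is simple to instantiate. The toy below hides a spin assignment, reveals couplings that are noisy versions of s*_i s*_j, and runs zero-temperature greedy dynamics as a naive stand-in for the belief-propagation and Monte Carlo algorithms the review actually analyzes; the system size and noise level are arbitrary choices placing the toy in an easy regime:

```python
import numpy as np

rng = np.random.default_rng(4)
n, noise = 60, 0.6

# Planted spin glass: hide an assignment s*, then reveal couplings that are
# noisy versions of s*_i s*_j; inference must recover s* from J alone.
s_star = rng.choice([-1, 1], size=n)
J = np.triu(np.outer(s_star, s_star) + noise * rng.normal(size=(n, n)), 1)
J = J + J.T                                    # symmetric, zero diagonal

# Naive zero-temperature dynamics: flip any spin whose flip increases the
# alignment sum_{i<j} J_ij s_i s_j, until no single flip helps.
s = rng.choice([-1, 1], size=n)
improved = True
while improved:
    improved = False
    for i in range(n):
        if s[i] * (J[i] @ s) < 0:              # local field opposes spin i
            s[i] = -s[i]
            improved = True

overlap = abs(np.mean(s * s_star))             # 1.0 means perfect recovery
```

At this noise level the couplings carry enough information that even this crude algorithm typically recovers the hidden assignment; the phase transitions the review studies appear when the noise is pushed toward the threshold where no algorithm can.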
Generalized Approximate Message Passing for Estimation with Random Linear Mixing
We consider the estimation of an i.i.d. random vector observed through a
linear transform followed by a componentwise, probabilistic (possibly
nonlinear) measurement channel. A novel algorithm, called generalized
approximate message passing (GAMP), is presented that provides computationally
efficient approximate implementations of max-sum and sum-product loopy belief
propagation for such problems. The algorithm extends earlier approximate
message passing methods to incorporate arbitrary distributions on both the
input and output of the transform and can be applied to a wide range of
problems in nonlinear compressed sensing and learning.
Extending an analysis by Bayati and Montanari, we argue that the asymptotic
componentwise behavior of the GAMP method under large, i.i.d. Gaussian
transforms is described by a simple set of state evolution (SE) equations. From
the SE equations, one can exactly predict the asymptotic value of
virtually any componentwise performance metric including mean-squared error or
detection accuracy. Moreover, the analysis is valid for arbitrary input and
output distributions, even when the corresponding optimization problems are
non-convex. The results match predictions by Guo and Wang for relaxed belief
propagation on large sparse matrices and, in certain instances, also agree with
the optimal performance predicted by the replica method. The GAMP methodology
thus provides a computationally efficient approach, applicable to a large
class of non-Gaussian estimation problems, with precise asymptotic performance
guarantees.
Comment: 22 pages, 5 figures.
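GAMP itself handles arbitrary input and output channels, but its best-known special case, AMP for sparse recovery under an AWGN-style output, fits in a dozen lines and shows the two ingredients the abstract refers to: a componentwise denoiser and the Onsager correction that makes the state-evolution analysis go through. The problem sizes, threshold multiplier, and noiseless setup below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 400, 200, 20

# Sparse vector observed through a random Gaussian transform (noiseless toy).
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)       # approx. unit-norm columns
y = A @ x0

def soft(v, t):
    """Componentwise soft-threshold denoiser."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z = np.zeros(n), y.copy()
for _ in range(30):
    tau = np.linalg.norm(z) / np.sqrt(m)       # effective noise level
    x_new = soft(x + A.T @ z, 1.5 * tau)       # denoise the pseudo-data
    z = y - A @ x_new + z * (np.count_nonzero(x_new) / m)  # Onsager term
    x = x_new

rel_err = np.linalg.norm(x - x0) / np.linalg.norm(x0)
```

Dropping the Onsager term turns this into plain iterative thresholding, whose per-iteration errors are no longer asymptotically Gaussian; the correction is exactly what the state-evolution equations of the paper rely on.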
Data-Augmented Structure-Property Mapping for Accelerating Computational Design of Advanced Material Systems
Advanced material systems refer to materials that are comprised of multiple traditional constituents arranged in complex microstructure morphologies, which lead to properties superior to those of conventional materials. This dissertation is motivated by the grand challenge of accelerating the design of advanced material systems through systematic optimization with respect to material microstructures or processing settings. While optimization techniques have mature applications to a large range of engineering systems, their application to material design meets unique challenges due to the high dimensionality of microstructures and the high cost of computing process-structure-property (PSP) mappings. The key to addressing these challenges is the learning of material representations and predictive PSP mappings while managing a small data-acquisition budget. This dissertation thus focuses on developing learning mechanisms that leverage context-specific meta-data and physics-based theories. Two research tasks are conducted. In the first, we develop a statistical generative model that learns to characterize high-dimensional microstructure samples using low-dimensional features. We improve the data efficiency of a variational autoencoder by introducing a morphology loss into the training. We demonstrate that the resultant microstructure generator is morphology-aware when trained on a small set of material samples, and can effectively constrain the microstructure space during material design. In the second task, we investigate an active learning mechanism where new samples are acquired based on their violation of a theory-driven constraint on the physics-based model.
We demonstrate, using a topology optimization case, that while data acquisition through the physics-based model is often expensive (e.g., obtaining microstructures through simulation or optimization processes), the evaluation of the constraint can be far more affordable (e.g., checking whether a solution is optimal or at equilibrium). We show that this theory-driven learning algorithm can lead to much improved learning efficiency and generalization performance when such constraints can be derived. The outcome of this research is a better understanding of how physics knowledge about material systems can be integrated into machine learning frameworks, in order to achieve more cost-effective and reliable learning of material representations and predictive models, which are essential to accelerating computational material design.
Doctoral Dissertation, Mechanical Engineering, 201
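The acquisition rule in the second task, querying the expensive physics-based model only where a cheap theory-driven check flags the current prediction, can be sketched abstractly. Everything below is hypothetical scaffolding (a quadratic stand-in for the solve, a nearest-neighbour surrogate, a residual tolerance), not the dissertation's actual models:

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a costly physics-based solve (e.g. a topology
    optimization or microstructure simulation); here just a quadratic."""
    return x ** 2

def check_fails(x, y_pred, tol=0.05):
    """Cheap theory-driven check on a *prediction*. In this toy the check
    happens to use the same formula as the model; in practice it would be
    an optimality/equilibrium residual far cheaper than the full solve."""
    return abs(y_pred - x ** 2) > tol

# Seed a crude surrogate, then pay for the expensive model only where the
# cheap check flags the surrogate's prediction.
X = list(np.linspace(-1.0, 1.0, 3))
Y = [expensive_model(x) for x in X]
n_expensive = 0
for x in np.linspace(-1.0, 1.0, 41):
    y_pred = Y[int(np.argmin(np.abs(np.array(X) - x)))]  # nearest-neighbour surrogate
    if check_fails(x, y_pred):
        X.append(x)
        Y.append(expensive_model(x))
        n_expensive += 1
```

The budget saving comes from the gap between the two oracles: most grid points are answered by the surrogate alone, and the expensive model runs only where the theory-driven check says the prediction cannot be trusted.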