A Belief Propagation Based Framework for Soft Multiple-Symbol Differential Detection
Soft noncoherent detection, which relies on calculating the \textit{a
posteriori} probabilities (APPs) of the transmitted bits without channel
estimation, is imperative for achieving excellent detection performance in
high-dimensional wireless communications. In this paper, a high-performance
belief propagation (BP)-based soft multiple-symbol differential detection
(MSDD) framework, dubbed BP-MSDD, is proposed with its illustrative application
in differential space-time block-code (DSTBC)-aided ultra-wideband impulse
radio (UWB-IR) systems. Firstly, we revisit the signal sampling with the aid of
a trellis structure and decompose the trellis into multiple subtrellises.
Furthermore, we derive an APP calculation algorithm, in which the
forward-and-backward message passing mechanism of BP operates on the
subtrellises. The proposed BP-MSDD is capable of significantly outperforming
the conventional hard-decision MSDDs. However, the computational complexity of
the BP-MSDD increases exponentially with the number of MSDD trellis states. To
circumvent this excessive complexity for practical implementations, we
reformulate the BP-MSDD, and additionally propose a Viterbi algorithm
(VA)-based hard-decision MSDD (VA-HMSDD) and a VA-based soft-decision MSDD
(VA-SMSDD). Moreover, both the proposed BP-MSDD and VA-SMSDD can be exploited
in conjunction with soft channel decoding to obtain powerful iterative
detection and decoding based receivers. Simulation results demonstrate the
effectiveness of the proposed algorithms in DSTBC-aided UWB-IR systems.
Comment: 14 pages, 12 figures, 3 tables, accepted to appear in IEEE
Transactions on Wireless Communications, Aug. 201
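The forward-and-backward message passing at the heart of trellis-based APP calculation can be sketched generically. The following toy Python is an illustrative sketch only, not the paper's BP-MSDD (which runs an analogous recursion on subtrellises derived from the DSTBC-UWB signal model); the branch metrics `gamma` are assumed to be given.

```python
import numpy as np

def forward_backward_app(gamma):
    """Generic forward-backward recursion on a trellis.

    gamma[t][i][j] is the (unnormalised) branch metric for moving from
    state i at time t to state j at time t+1, with the observation at
    time t folded in.  Returns the a posteriori state probabilities.
    Illustrative sketch only; the paper's BP-MSDD operates on
    subtrellises of the MSDD trellis rather than a single small one.
    """
    gamma = np.asarray(gamma, dtype=float)
    T, S, _ = gamma.shape
    alpha = np.zeros((T + 1, S))            # forward messages
    beta = np.zeros((T + 1, S))             # backward messages
    alpha[0] = 1.0 / S                      # uniform prior on the initial state
    beta[T] = 1.0
    for t in range(T):                      # forward pass
        alpha[t + 1] = alpha[t] @ gamma[t]
        alpha[t + 1] /= alpha[t + 1].sum()  # normalise for numerical stability
    for t in range(T - 1, -1, -1):          # backward pass
        beta[t] = gamma[t] @ beta[t + 1]
        beta[t] /= beta[t].sum()
    post = alpha * beta                     # combine forward and backward messages
    return post / post.sum(axis=1, keepdims=True)
```

With uninformative metrics the posteriors stay uniform; a metric that forces every transition into one state drives the final posterior onto that state.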
Using Quantum Computers for Quantum Simulation
Numerical simulation of quantum systems is crucial to further our
understanding of natural phenomena. Many systems of key interest and
importance, in areas such as superconducting materials and quantum chemistry,
are thought to be described by models which we cannot solve with sufficient
accuracy, neither analytically nor numerically with classical computers. Using
a quantum computer to simulate such quantum systems has been viewed as a key
application of quantum computation from the very beginning of the field in the
1980s. Moreover, useful results beyond the reach of classical computation are
expected to be accessible with fewer than a hundred qubits, making quantum
simulation potentially one of the earliest practical applications of quantum
computers. In this paper we survey the theoretical and experimental development
of quantum simulation using quantum computers, from the first ideas to the
intense research efforts currently underway.Comment: 43 pages, 136 references, review article, v2 major revisions in
response to referee comments, v3 significant revisions, identical to
published version apart from format, ArXiv version has table of contents and
references in alphabetical orde
Learning action-oriented models through active inference
Converging theories suggest that organisms learn and exploit probabilistic models of their environment. However, it remains unclear how such models can be learned in practice. The open-ended complexity of natural environments means that it is generally infeasible for organisms to model their environment comprehensively. Alternatively, action-oriented models attempt to encode a parsimonious representation of adaptive agent-environment interactions. One approach to learning action-oriented models is to learn online in the presence of goal-directed behaviours. This constrains an agent to behaviourally relevant trajectories, reducing the diversity of the data a model needs to account for. Unfortunately, this approach can cause models to prematurely converge to sub-optimal solutions, through a process we refer to as a bad-bootstrap. Here, we exploit the normative framework of active inference to show that efficient action-oriented models can be learned by balancing goal-oriented and epistemic (information-seeking) behaviours in a principled manner. We illustrate our approach using a simple agent-based model of bacterial chemotaxis. We first demonstrate that learning via goal-directed behaviour indeed constrains models to behaviourally relevant aspects of the environment, but that this approach is prone to sub-optimal convergence. We then demonstrate that epistemic behaviours facilitate the construction of accurate and comprehensive models, but that these models are not tailored to any specific behavioural niche and are therefore less efficient in their use of data. Finally, we show that active inference agents learn models that are parsimonious, tailored to action, and which avoid bad bootstraps and sub-optimal convergence. Critically, our results indicate that models learned through active inference can support adaptive behaviour in spite of, and indeed because of, their departure from veridical representations of the environment.
Our approach provides a principled method for learning adaptive models from limited interactions with an environment, highlighting a route to sample-efficient learning algorithms.
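As a loose illustration of the trade-off studied here, the sketch below uses a two-armed bandit as a stand-in for the chemotaxis task. It is not the authors' model: the payoff probabilities, the Beta-posterior bookkeeping, and the posterior-variance epistemic bonus (a crude proxy for expected information gain, standing in for the full expected-free-energy calculation) are all our own assumptions. The point it illustrates is that scoring actions by pragmatic value plus an epistemic bonus makes the agent sample under-explored options until its model is good enough to exploit.

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.3, 0.7])   # hypothetical unknown payoff probabilities
a_cnt = np.ones(2)              # Beta(1,1) posterior per arm: successes + 1
b_cnt = np.ones(2)              # failures + 1

def action_score(arm):
    """Score an action: pragmatic value plus an epistemic bonus.

    The bonus is the posterior variance of the payoff estimate, a crude
    proxy for expected information gain; active inference proper uses
    the expected free energy, which this sketch only imitates."""
    n = a_cnt[arm] + b_cnt[arm]
    p_hat = a_cnt[arm] / n                       # current belief about payoff
    epistemic = p_hat * (1.0 - p_hat) / (n + 1)  # shrinks as the arm is learned
    return p_hat + epistemic

pulls = np.zeros(2, dtype=int)
for t in range(2000):
    arm = int(np.argmax([action_score(0), action_score(1)]))
    reward = rng.random() < true_p[arm]
    a_cnt[arm] += reward                         # Bayesian belief update
    b_cnt[arm] += 1 - reward
    pulls[arm] += 1
```

After enough interactions the agent's model of the better arm is accurate and it concentrates its pulls there, while the inferior arm's model stays coarse but sufficient, mirroring the action-oriented (rather than comprehensive) models described above.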
Sparse Signal Processing Concepts for Efficient 5G System Design
As it becomes increasingly apparent that 4G will not be able to meet the
emerging demands of future mobile communication systems, the question of what
could make up a 5G system, what the crucial challenges are, and what the key
drivers are is the subject of intensive, ongoing discussions. Partly due to the advent of
compressive sensing, methods that can optimally exploit sparsity in signals
have received tremendous attention in recent years. In this paper we will
describe a variety of scenarios in which signal sparsity arises naturally in 5G
wireless systems. Signal sparsity and the associated rich collection of tools
and algorithms will thus be a viable source for innovation in 5G wireless
system design. We will describe applications of this sparse signal processing
paradigm in MIMO random access, cloud radio access networks, compressive
channel-source network coding, and embedded security. We will also emphasize
important open problems that may arise in 5G system design, in whose solution
sparsity will potentially play a key role.
Comment: 18 pages, 5 figures, accepted for publication in IEEE Access
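As a concrete instance of the sparse signal processing toolbox the authors draw on, the sketch below runs orthogonal matching pursuit, a generic textbook recovery algorithm (not one proposed in this paper), to reconstruct a sparse vector from far fewer linear measurements than unknowns; the dimensions and signal are an arbitrary toy choice.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select the column of A most
    correlated with the residual, then least-squares refit on the
    selected support.  A standard sparse-recovery routine, shown as a
    generic example of exploiting signal sparsity."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # project out chosen columns
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

# Toy demo: a 4-sparse length-100 signal from 60 random measurements.
rng = np.random.default_rng(1)
m, n, k = 60, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = np.array([2.0, -1.5, 1.0, 3.0])
x_hat = omp(A, A @ x_true, k)
```

Despite the system being underdetermined (60 equations, 100 unknowns), the sparsity prior makes exact recovery possible, which is precisely the leverage the 5G scenarios above exploit.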
Polynomial tuning of multiparametric combinatorial samplers
Boltzmann samplers and the recursive method are prominent algorithmic
frameworks for the approximate-size and exact-size random generation of large
combinatorial structures, such as maps, tilings, RNA sequences or various
tree-like structures. In their multiparametric variants, these samplers make it
possible to control the profile of expected values corresponding to multiple
combinatorial parameters. One can control, for instance, the number of leaves,
profile of node degrees in trees or the number of certain subpatterns in
strings. However, such flexible control requires an additional non-trivial
tuning procedure. In this paper, we propose an efficient tuning algorithm,
polynomial-time with respect to the number of tuned parameters, based on convex
optimisation techniques. Finally, we illustrate the efficiency of our approach
using several applications of rational, algebraic and P\'olya structures
including polyomino tilings with prescribed tile frequencies, planar trees with
a given specific node degree distribution, and weighted partitions.
Comment: Extended abstract, accepted to ANALCO2018. 20 pages, 6 figures,
colours. Implementation and examples are available at [1]
https://github.com/maciej-bendkowski/boltzmann-brain [2]
https://github.com/maciej-bendkowski/multiparametric-combinatorial-sampler
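A one-parameter toy version of this tuning problem (our own simplification: plane binary trees with generating function $B(z) = 1 + zB(z)^2$, and simple bisection in place of the paper's multiparametric convex-optimisation solver) picks the Boltzmann parameter $z$ so that the expected tree size hits a target, exploiting the fact that the expectation $z B'(z)/B(z)$ is monotone in $z$.

```python
import math
import random

def B(z):
    """Generating function of plane binary trees, solving B = 1 + z*B^2."""
    return (1.0 - math.sqrt(1.0 - 4.0 * z)) / (2.0 * z)

def expected_size(z, h=1e-7):
    """Expected number of internal nodes under the Boltzmann model:
    z * B'(z) / B(z), with B' approximated by central differences."""
    return z * (B(z + h) - B(z - h)) / (2.0 * h) / B(z)

def tune(target_size, lo=1e-9, hi=0.2499):
    """One-parameter analogue of the tuning procedure: invert the
    monotone map z -> expected_size(z) by bisection (z < 1/4 is the
    radius of convergence of B)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if expected_size(mid) < target_size:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sample(z, rng):
    """Free Boltzmann sampler for B = 1 + z*B^2: emit a leaf with
    probability 1/B(z), otherwise an internal node with two recursive
    subtrees.  Returns the number of internal nodes."""
    if rng.random() < 1.0 / B(z):
        return 0
    return 1 + sample(z, rng) + sample(z, rng)
```

In the multiparametric setting of the paper, one such equation per controlled parameter must be satisfied simultaneously, which is why a convex-optimisation formulation replaces the one-dimensional bisection used here.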
Expander $\ell_0$-Decoding
We introduce two new algorithms, Serial-$\ell_0$ and Parallel-$\ell_0$, for
solving a large underdetermined linear system of equations $y = Ax$ when it is
known that $x$ has at most $k$ nonzero entries and that $A$ is the adjacency
matrix of an unbalanced left $d$-regular expander graph. The matrices in this
class are sparse and allow a highly efficient implementation. A number of
algorithms have been designed to work exclusively under this setting, composing
the branch of combinatorial compressed-sensing (CCS).
Serial-$\ell_0$ and Parallel-$\ell_0$ iteratively minimise $\|y - A\hat{x}\|_0$ by successfully combining two desirable features of previous CCS
algorithms: the information-preserving strategy of ER, and the parallel
updating mechanism of SMP. We are able to link these elements and guarantee
convergence in $\mathcal{O}(dn \log k)$ operations by assuming that the signal
$x$ is dissociated, meaning that all of the subset sums of the support of $x$
are pairwise different. However, we observe empirically that the signal need
not be exactly dissociated in practice. Moreover, we observe Serial-$\ell_0$
and Parallel-$\ell_0$ to be able to solve large-scale problems with a larger
fraction of nonzeros than other algorithms when the number of measurements is
substantially less than the signal length; in particular, they are able to
reliably solve for a $k$-sparse vector from $m$ expander measurements with
$k/m$ up to four times greater than what is achievable by
$\ell_1$-regularization from dense Gaussian measurements.
Additionally, Serial-$\ell_0$ and Parallel-$\ell_0$ are observed to be able to
solve large problem sizes in substantially less time than other algorithms for
compressed sensing. In particular, Parallel-$\ell_0$ is structured to take
advantage of massively parallel architectures.
Comment: 14 pages, 10 figures
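The dissociation condition on the signal can be checked directly. The helper below is a small utility of our own for illustrating the definition, not one of the paper's decoders; it is exponential in the sparsity, so it is only sensible for small supports.

```python
import itertools

def is_dissociated(x, tol_digits=12):
    """Return True if all subset sums of the nonzero entries of x are
    pairwise different (the 'dissociated' condition under which
    convergence is guaranteed in the abstract above).  Checking helper
    of our own, exponential in the support size."""
    support_vals = [v for v in x if v != 0]
    seen = set()
    total = 0
    for r in range(len(support_vals) + 1):
        for subset in itertools.combinations(support_vals, r):
            seen.add(round(sum(subset), tol_digits))  # round to tolerate float noise
            total += 1
    return len(seen) == total
```

Powers of two are dissociated since each subset sum has a unique binary expansion, while a support like {1, 2, 3} is not, because 1 + 2 = 3.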