19 research outputs found
Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning
The paper introduces the application of information geometry to describe the
ground states of Ising models by utilizing parity-check matrices of cyclic and
quasi-cyclic codes on toric and spherical topologies. The approach establishes
a connection between machine learning and error-correcting coding. This
proposed approach has implications for the development of new embedding methods
based on trapping sets. Statistical physics and number geometry are applied to
optimize error-correcting codes, leading to these embedding and sparse-factorization
methods. The paper establishes a direct connection between DNN architecture and
error-correcting coding by demonstrating how state-of-the-art architectures
(ChordMixer, Mega, Mega-chunk, CDIL, ...) from the long-range arena can be
equivalent to block and convolutional LDPC codes (Cage-graph, Repeat Accumulate).
QC codes correspond to certain types of chemical elements, with the carbon element
being represented by the mixed automorphism
with the carbon element being represented by the mixed automorphism
Shu-Lin-Fossorier QC-LDPC code. The connections between Belief Propagation and
the Permanent, Bethe-Permanent, Nishimori Temperature, and Bethe-Hessian Matrix
are elaborated upon in detail. The Quantum Approximate Optimization Algorithm
(QAOA) used in the Sherrington-Kirkpatrick Ising model can be seen as analogous
to the back-propagation loss function landscape in training DNNs. This
similarity creates a problem with trapping-set (TS) pseudo-codewords comparable to
the one arising in the belief propagation method. Additionally, the layer depth in
QAOA correlates with the number of belief propagation decoding iterations in the
Wiberg decoding tree. Overall, this work has the potential to advance multiple fields, from
Information Theory, DNN architecture design (sparse and structured prior graph
topology), efficient hardware design for Quantum and Classical DPU/TPU (graph,
quantized, and shift-register architectures) to Materials Science and beyond.

Comment: 71 pages, 42 Figures, 1 Table, 1 Appendix. arXiv admin note: text
overlap with arXiv:2109.08184 by another author
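The correspondence the abstract describes, in which the codewords of a parity-check code are the ground states of an Ising-type energy, can be illustrated with a minimal sketch. This is not the paper's implementation; the cyclic (7,4) Hamming code and the simple check-violation energy used here are illustrative assumptions only:

```python
# Sketch: codewords of a small cyclic code as Ising-model ground states.
# H is a parity-check matrix of the (7,4) Hamming code whose rows are
# cyclic shifts of one another (illustrative, not the paper's codes).
import itertools
import numpy as np

H = np.array([[1, 0, 1, 1, 1, 0, 0],
              [0, 1, 0, 1, 1, 1, 0],
              [0, 0, 1, 0, 1, 1, 1]])

def check_energy(bits):
    """Number of violated parity checks; codewords have energy 0 (ground states)."""
    return int(np.sum((H @ bits) % 2))

ground_states = [b for b in itertools.product([0, 1], repeat=7)
                 if check_energy(np.array(b)) == 0]
print(len(ground_states))  # → 16, i.e. the 2^4 codewords of the (7,4) code
```

Every satisfying assignment of the parity checks is a zero-energy configuration, so minimizing this Ising-type energy is equivalent to decoding.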
Heterogeneous Networked Data Recovery from Compressive Measurements Using a Copula Prior
Large-scale data collection by means of wireless sensor network and
internet-of-things technology poses various challenges in view of the
limitations in transmission, computation, and energy resources of the
associated wireless devices. Compressive data gathering based on compressed
sensing has been proven a well-suited solution to the problem. Existing designs
exploit the spatiotemporal correlations among data collected by a specific
sensing modality. However, many applications, such as environmental monitoring,
involve collecting heterogeneous data that are intrinsically correlated. In
this study, we propose to leverage the correlation from multiple heterogeneous
signals when recovering the data from compressive measurements. To this end, we
propose a novel recovery algorithm---built upon belief-propagation
principles---that leverages correlated information from multiple heterogeneous
signals. To efficiently capture the statistical dependencies among diverse
sensor data, the proposed algorithm uses the statistical model of copula
functions. Experiments with heterogeneous air-pollution sensor measurements
show that the proposed design provides significant performance improvements
against state-of-the-art compressive data gathering and recovery schemes that
use classical compressed sensing, compressed sensing with side information, and
distributed compressed sensing.

Comment: accepted to IEEE Transactions on Communications
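The recovery setting can be sketched generically. The following is not the paper's copula-based belief-propagation algorithm; it is a standard compressed-sensing recovery (ISTA with soft thresholding) on synthetic data, with all dimensions and the regularization weight chosen arbitrarily for illustration:

```python
# Generic compressed-sensing recovery sketch (ISTA with soft thresholding).
# Illustrative only: the paper's algorithm uses belief propagation with a
# copula prior over multiple heterogeneous signals, which is not shown here.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                                  # compressive measurements

t = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from the spectral norm
lam, x = 0.05, np.zeros(n)
for _ in range(1000):
    x = x + t * A.T @ (y - A @ x)                       # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)  # soft threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error
```

Side information from correlated heterogeneous signals, as in the paper, would enter this picture through the prior rather than through the plain l1 penalty used above.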
D11.2 Consolidated results on the performance limits of wireless communications
Deliverable D11.2 of the European project NEWCOM#. The report presents the intermediate results of the N# JRAs on Performance Limits of Wireless Communications and highlights the fundamental issues that have been investigated by WP1.1. The report illustrates the Joint Research Activities (JRAs) already identified during the first year of the project, which are currently ongoing. For each activity there is a description, an illustration of its adherence and relevance to the identified fundamental open issues, a short presentation of the preliminary results, and a roadmap for the joint research work in the next year. Appendices for each JRA give technical details on the scientific activity in each JRA.
High-Dimensional Information Detection based on Correlation Imaging Theory
Radar is a device that uses electromagnetic (EM) waves to detect targets; it can measure position
and motion parameters and extract target characteristics by analyzing the signal reflected from the
target. From the perspective of the physical foundations of radar theory, the more than 70 years of
radar development have been based on the EM-field fluctuation theory of physics. Many theories
have been developed for one-dimensional signal processing. For example, a variety of
threshold-filtering methods have been widely used to resist interference during detection. Optimal
state estimation describes how the statistical characteristics of the target propagate over time in
the probability domain. Compressed sensing greatly improves the efficiency of reconstructing sparse
signals. These theories perform one-dimensional information processing, and the information they
obtain is a deterministic description of the EM field. The correlated imaging technique, by contrast,
builds on the high-order coherence property of the EM field, using the field's fluctuation
characteristics to realize non-local imaging. Correlated imaging radar, a combination of correlated
imaging techniques and modern information theory, will provide a novel remote sensing detection
and imaging method. More importantly, correlated imaging radar is a new research field; therefore,
a complete theoretical framework and application system urgently need to be built up and improved.
Based on the coherence theory of the EM field, the work in this thesis explores methods of
determining the statistical characteristics of the EM field so that high-dimensional target
information can be detected, including theoretical analysis, principle design, imaging modes, target
detection models, image reconstruction algorithms, visibility enhancement, and system design.
Simulations and real experiments are set up to prove the theory's validity and the systems'
feasibility.
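A toy version of the second-order correlation principle the thesis builds on can be sketched as follows. The speckle patterns, binary object, and bucket-detector model are hypothetical placeholders, not the thesis's system design:

```python
# Minimal sketch of second-order correlation (ghost) imaging: correlate
# fluctuating illumination patterns with a single "bucket" intensity.
# All quantities here are synthetic placeholders for illustration.
import numpy as np

rng = np.random.default_rng(1)
obj = np.zeros((8, 8))
obj[2:6, 3:5] = 1.0                            # hypothetical binary target
patterns = rng.random((5000, 8, 8))            # random speckle illuminations
# Bucket detector: total intensity returned by the target per pattern
bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))

# Second-order correlation <I_r B> - <I_r><B> reconstructs the object
recon = np.tensordot(bucket - bucket.mean(),
                     patterns - patterns.mean(axis=0),
                     axes=([0], [0])) / len(bucket)
peak = np.unravel_index(recon.argmax(), recon.shape)
print(peak)  # the correlation peak falls inside the target region
```

The correlation image is proportional to the object's reflectivity because only pixels inside the target contribute to the fluctuations of the bucket signal.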
A Novel Algorithm for Compression of High Amplitude-Resolution Seismic Data
Renewable sources cannot meet the energy demand of a growing global market; it is therefore expected that oil & gas will remain a substantial source of energy in the coming years. To find new oil & gas deposits that would satisfy growing global energy demands, significant efforts are constantly devoted to increasing the efficiency of seismic surveys. It is commonly considered that, in the initial phase of exploration and production of new fields, high-resolution and high-quality images of the subsurface are of great importance. As one part of the seismic data processing chain, efficient management and delivery of the large data sets vastly produced by the industry during seismic surveys becomes extremely important in order to facilitate further seismic data processing and interpretation. In this respect, efficiency to a large extent relies on the efficiency of the compression scheme, which is often required to enable faster transfer and access to data, as well as efficient data storage. Motivated by the superior performance of High Efficiency Video Coding (HEVC), and driven by the rapid growth in data volume produced by seismic surveys, this work explores a 32 bits per pixel (b/p) extension of the HEVC codec for compression of seismic data. It is proposed to reassemble seismic slices into a format that corresponds to a video signal and to benefit from the coding gain achieved by the HEVC inter mode, besides the possible advantages of the (still-image) HEVC intra mode. To this end, this work modifies almost all components of the original HEVC codec to cater for high bit-depth coding of seismic data: the Lagrange multiplier used in the optimization of the coding parameters has been adapted to the new data statistics, the core transform and quantization have been reimplemented to handle the increased bit-depth range, and a modified adaptive binary arithmetic coder has been employed for efficient entropy coding.
In addition, optimized block selection, reduced intra prediction modes, and flexible motion estimation are tested to adapt to the structure of seismic data. Even though the new codec, after implementation of the proposed modifications, goes beyond the standardized HEVC, it still maintains a generic HEVC structure and is developed under the general HEVC framework. There is no similar work in the field of seismic data compression that uses HEVC as a base codec. Thus, a specific codec design has been tailored which, when compared to JPEG-XR and a commercial wavelet-based codec, significantly improves the peak signal-to-noise ratio (PSNR) vs. compression ratio performance for 32 b/p seismic data. Depending on the proposed configuration, the PSNR gain ranges from 3.39 dB up to 9.48 dB. Also, relying on the specific characteristics of seismic data, an optimized encoder is proposed in this work. It reduces encoding time by 67.17% for the All-I configuration on the trace-image dataset, and by 67.39% for All-I, 97.96% for the P2 configuration, and 98.64% for the B configuration on the 3D wavefield dataset, with negligible coding performance losses. As a side contribution of this work, HEVC is analyzed within all of its functional units, so that the presented work itself can serve as a specific overview of the methods incorporated into the standard.
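The PSNR figure of merit used above extends naturally to high bit depths. A minimal sketch, assuming the peak value is set by the sample bit depth (the thesis's exact PSNR convention for 32 b/p data may differ):

```python
# Illustrative PSNR computation with a bit-depth-dependent peak value.
# This is a generic definition, not the thesis's evaluation code.
import numpy as np

def psnr(orig, recon, bit_depth=32):
    """PSNR in dB, with the peak signal value set by the sample bit depth."""
    err = orig.astype(np.float64) - recon.astype(np.float64)
    mse = np.mean(err ** 2)
    peak = 2.0 ** bit_depth - 1.0
    return float('inf') if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)

# Sanity checks with 8-bit data: identical arrays give infinite PSNR,
# maximal per-sample error gives 0 dB.
a = np.zeros(16)
print(psnr(a, a, bit_depth=8))                    # → inf
print(psnr(a, np.full(16, 255.0), bit_depth=8))   # → 0.0
```

At 32 b/p the peak term grows to 2^32 - 1, which is why the codec's transform and quantization stages had to be reimplemented for the increased dynamic range.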
A STUDY OF LINEAR ERROR CORRECTING CODES
Since Shannon's ground-breaking work in 1948, there have been two main development streams
of channel coding in approaching the limit of communication channels, namely classical coding
theory which aims at designing codes with large minimum Hamming distance and probabilistic
coding which places the emphasis on low complexity probabilistic decoding using long codes built
from simple constituent codes. This work presents some further investigations in these two channel
coding development streams.
Low-density parity-check (LDPC) codes form a class of capacity-approaching codes with sparse
parity-check matrices and low-complexity decoders. Two novel methods of constructing algebraic
binary LDPC codes are presented. These methods are based on the theory of cyclotomic cosets,
idempotents and Mattson-Solomon polynomials, and are complementary to each other. In addition
to some new cyclic iteratively decodable codes, the two methods generate the well-known Euclidean
and projective geometry codes. Their extension to non-binary fields is shown to be straightforward.
These algebraic cyclic LDPC codes, for short block lengths, converge considerably well under iterative
decoding. It is also shown that for some of these codes, maximum likelihood performance may
be achieved by a modified belief propagation decoder which uses a different subset of codewords
of the dual code for each iteration.
Following a property of the revolving-door combination generator, multi-threaded minimum
Hamming distance computation algorithms are developed. Using these algorithms, the previously
unknown minimum Hamming distance of the quadratic residue code for prime 199 has been evaluated.
In addition, the highest minimum Hamming distance attainable by all binary cyclic codes
of odd lengths from 129 to 189 has been determined, and as many as 901 new binary linear codes
which have higher minimum Hamming distance than the previously best known linear codes
have been found.
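The codeword enumeration that the multi-threaded revolving-door algorithms above scale up can be sketched by brute force for a tiny cyclic code. The (7,4) Hamming code with generator polynomial g(x) = 1 + x + x^3 is a standard textbook example, not one of the codes studied in the thesis:

```python
# Brute-force minimum Hamming distance of a small cyclic code by
# enumerating all nonzero codewords. Feasible only for tiny codes; the
# thesis uses revolving-door enumeration and multi-threading to reach
# much larger codes such as the prime-199 quadratic residue code.
import itertools
import numpy as np

# Generator matrix of the cyclic (7,4) Hamming code, g(x) = 1 + x + x^3:
# each row is a shift of the generator polynomial's coefficients.
g = [1, 1, 0, 1]
G = np.array([[0] * i + g + [0] * (3 - i) for i in range(4)])  # 4 x 7

# Minimum distance = smallest weight over all 2^4 - 1 nonzero codewords
d_min = min(int((np.array(m) @ G % 2).sum())
            for m in itertools.product([0, 1], repeat=4) if any(m))
print(d_min)  # → 3
```

For an (n, k) code this enumeration costs 2^k codeword evaluations, which is exactly why the circulant-structure reductions described below matter for larger codes.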
It is shown that by exploiting the structure of circulant matrices, the number of codewords
required to compute the minimum Hamming distance, and the number of codewords of a given
Hamming weight, of binary double-circulant codes based on primes may be reduced. A means
of independently verifying the exhaustively computed number of codewords of a given Hamming
weight of these double-circulant codes is developed, and in conjunction with this it is proved that
some published results are incorrect; the correct weight spectra are presented. Moreover, it is
shown that it is possible to estimate the minimum Hamming distance of this family of prime-based
double-circulant codes.
It is shown that linear codes may be efficiently decoded using the incremental correlation Dorsch
algorithm. By extending this algorithm, a list decoder is derived and a novel, CRC-less error detection
mechanism that offers much better throughput and performance than the conventional CRC
scheme is described. Using the same method, it is shown that the performance of the conventional
CRC scheme may be considerably enhanced. Error detection is an integral part of an incremental
redundancy communications system, and it is shown that sequences of good error correction codes,
suitable for use in incremental redundancy communications systems, may be obtained using
Constructions X and XX. Examples are given and their performance is presented in comparison to
conventional CRC schemes.
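The conventional CRC baseline referred to above can be sketched with a minimal bitwise implementation. The CRC-8 polynomial 0x07 with zero initial value is an arbitrary illustrative choice, not the polynomial used in the thesis:

```python
# Sketch of conventional CRC error detection (the baseline the thesis's
# CRC-less scheme is compared against). CRC-8, polynomial 0x07, init 0x00.
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 over a byte string."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift left; on overflow of the top bit, reduce by the polynomial
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

# A receiver recomputes the CRC and flags a mismatch as a detected error.
print(hex(crc8(b"123456789")))  # → 0xf4 (standard CRC-8 check value)
```

A mismatch between the transmitted and recomputed CRC flags an error; the thesis's list-decoder-based mechanism achieves error detection without spending these extra parity bits.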
The selective updating of working memory: a predictive coding account
Goal-relevant information maintained in working memory is remarkably robust and resistant to distraction. However, our nervous system is endowed with exceptional flexibility; therefore such information can be updated almost effortlessly. A scenario, not uncommon in our daily life, is that selective maintenance and updating of information are achieved concurrently. This is an intriguing example of how our brain balances stability and flexibility when organising its knowledge. One possibility that may be drawn upon to understand this capacity is that working memory is represented as beliefs, or probability densities, which are updated in a context-sensitive manner. This means one could treat working memory in the same way as perception, i.e., memories are based on inferring the cause of sensations, except that the time scale ranges from an instant to prolonged anticipation. In this setting, working memory is susceptible to prior information encoded in the brain's model of its world. This thesis aimed to establish an interpretation of working memory processing that rests on the (generalised) predictive coding framework, or hierarchical inference in the brain. Specifically, the main question it asked was how anticipation modulates working memory updating (or maintenance). A novel working memory updating task was designed in this regard. Blood-oxygen-level-dependent (BOLD) imaging, machine learning, and dynamic causal modelling (DCM) were applied to identify the neural correlates of anticipation and of the violation of anticipation, as well as the causal structure generating these neural correlates. Anticipation induced neural activity in the dopaminergic midbrain and the striatum, whereas the fronto-parietal and cingulo-opercular networks were implicated when an anticipated update was omitted, and the midbrain, occipital cortices, and cerebellum when an update was unexpected.
DCM revealed that anticipation is a modulation of backward connections, whilst the associated surprise is mediated by forward and local recurrent modulations. Two mutually antagonistic pathways were differentially modulated under anticipatory flexibility and stability, respectively. The overall results indicate that working memory may well follow the cortical message-passing scheme that enables hierarchical inference.
Attuning to the mathematics of difference: Haptic constructions of number.
CAPTeaM develops and trials activities that Challenge Ableist Perspectives on the Teaching of Mathematics. The project involves teachers and researchers from the UK and Brazil in reflecting upon the practices that enable or disable the participation of disabled learners in mathematics. In this paper, we focus on two themes that emerged from data analyses generated in the first phase of the study: deconstructing the notion of the normal mathematics student/classroom, and attuning mathematics teaching strategies to student diversity. Here, we address these themes through exemplifying participants' haptic constructions of number in the context of a multiplication task, in terms of four strategies they devise: "counting fingers"; "tracing the sum"; "negotiating signs to indicate place value"; "decomposing".