Sensitivity of NEXT-100 detector to neutrinoless double beta decay
This thesis studies the sensitivity of the NEXT-100 detector to neutrinoless double beta decay. There is great interest in the search for this decay, since observing it could answer fundamental questions in neutrino physics. The detector constitutes the third phase of the NEXT experiment, the collaboration within which this thesis was carried out. A summary of each of the chapters into which the thesis is divided follows. The theoretical and experimental framework is introduced in the sections Neutrino physics, The search for neutrinoless double beta decay, and The NEXT experiment. The main analysis of the thesis is then described in Detector simulation, Data processing, and Sensitivity of the NEXT-100 detector.
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Easily decoded error correcting codes
This thesis is concerned with the decoding aspect of linear block error-correcting codes. When, as in most practical situations, the decoder cost is limited, an optimum code may be inferior in performance to a longer sub-optimum code of the same rate. This consideration is a central theme of the thesis.
The best methods available for decoding short optimum codes and long B.C.H. codes are discussed; in some cases new decoding algorithms for these codes are introduced.
Hashim's "Nested" codes are then analysed. The method of nesting codes which was given by Hashim is shown to be optimum - but it is seen that the codes are less easily decoded than was previously thought.
"Conjoined" codes are introduced. It is shown how two codes with identical numbers of information bits may be "conjoined" to give a code with length and minimum distance equal to the sum of the respective parameters of the constituent codes but with the same number of information bits. A very simple decoding algorithm is given for the codes whereby each constituent codeword is decoded and then a decision is made as to the correct decoding. A technique is given for adding more codewords to conjoined codes without unduly increasing the decoder complexity.
Lastly, "Array" codes are described. They are formed by making parity checks over carefully chosen patterns of information bits arranged in a two-dimensional array. Various methods are given for choosing suitable patterns. Some of the resulting codes are self-orthogonal and certain of these have parameters close to the optimum for such codes. A method is given for adding more codewords to array codes, derived from a process of augmentation known for product codes
High-Speed Area-Efficient Hardware Architecture for the Efficient Detection of Faults in a Bit-Parallel Multiplier Utilizing the Polynomial Basis of GF(2^m)
The utilization of finite field multipliers is pervasive in contemporary
digital systems, with hardware implementation for bit parallel operation often
necessitating millions of logic gates. However, various digital design issues,
whether natural or stemming from soft errors, can result in gate malfunction,
ultimately leading to erroneous multiplier outputs. Thus, to prevent
susceptibility to error, it is imperative to employ an effective finite field
multiplier implementation that boasts a robust fault detection capability. This
study proposes a novel fault detection scheme for a recent bit-parallel
polynomial basis multiplier over GF(2^m), intended to achieve optimal fault
detection performance for finite field multipliers while simultaneously
maintaining a low-complexity implementation, a favored attribute in
resource-constrained applications like smart cards. The primary concept behind
the proposed approach is centered on the implementation of a BCH decoder that
utilizes a re-encoding technique and the FIBM algorithm in its first and second
sub-modules, respectively. This approach addresses hardware complexity
concerns while also making use of the Berlekamp-Rumsey-Solomon (BRS) algorithm
and the Chien search method in the third sub-module of the decoder to effectively
locate errors with minimal delay. The results of our synthesis indicate that
our proposed error detection and correction architecture for a 45-bit
multiplier with 5-bit errors achieves 37% and 49% reductions in critical path
delay compared to existing designs. Furthermore, the hardware complexity
associated with a 45-bit multiplicand that contains 5 errors is confined to a
mere 80%, which is significantly lower than the best-performing BCH-based
fault detection methodologies, including TMR, Hamming single-error
correction, and LDPC-based procedures within the realm of finite field
multiplication.
Comment: 9 pages, 4 figures. arXiv admin note: substantial text overlap with arXiv:2209.1338
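Among the decoder sub-modules named above, the Chien search is a standard, publicly documented step: given the error-locator polynomial Lambda(x), it evaluates Lambda at alpha^(-i) for each codeword position i and flags the positions where the result is zero. The Python sketch below assumes GF(2^4) with primitive polynomial x^4 + x + 1 and is only an illustration of the algorithm, not the hardware architecture proposed in the paper.

# GF(2^4) arithmetic via log/antilog tables, primitive polynomial x^4 + x + 1.
EXP = [0] * 30
LOG = [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:             # reduce modulo x^4 + x + 1
        x ^= 0x13
for i in range(15, 30):      # duplicate table for easy modular indexing
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_poly_eval(coeffs, x):
    """Evaluate a polynomial (coeffs[k] is the coefficient of x^k) at x."""
    result, power = 0, 1
    for c in coeffs:
        result ^= gf_mul(c, power)
        power = gf_mul(power, x)
    return result

def chien_search(locator, n=15):
    """Return positions i where Lambda(alpha^-i) == 0, i.e. error locations."""
    positions = []
    for i in range(n):
        alpha_inv_i = EXP[(-i) % 15]          # alpha^(-i)
        if gf_poly_eval(locator, alpha_inv_i) == 0:
            positions.append(i)
    return positions

# Example: errors at positions 2 and 7, so Lambda(x) = (1 - a^2 x)(1 - a^7 x).
a2, a7 = EXP[2], EXP[7]
locator = [1, a2 ^ a7, gf_mul(a2, a7)]        # 1 + (a^2 + a^7) x + a^9 x^2
assert chien_search(locator) == [2, 7]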
Computational Methods in Multi-Messenger Astrophysics using Gravitational Waves and High Energy Neutrinos
This dissertation seeks to describe advancements made in computational methods for multi-messenger astrophysics (MMA) using gravitational waves (GW) and neutrinos during Advanced LIGO (aLIGO)'s first through third observing runs (O1-O3) and, looking forward, to describe novel computational techniques suited to the challenges of both the burgeoning MMA field and high-performance computing as a whole.
The first two chapters provide an overview of MMA as it pertains to gravitational wave/high energy neutrino (GWHEN) searches, including a summary of expected astrophysical sources as well as GW, neutrino, and gamma-ray detectors used in their detection. These are followed in the third chapter by an in-depth discussion of LIGO’s timing system, particularly the diagnostic subsystem, describing both its role in MMA searches and the author’s contributions to the system itself.
The fourth chapter provides a detailed description of the Low-Latency Algorithm for Multi-messenger Astrophysics (LLAMA), the GWHEN pipeline developed by the author and used in O2 and O3. Relevant past multi-messenger searches are described first, followed by the O2 and O3 analysis methods, the pipeline’s performance, scientific results, and finally, an in-depth account of the library’s structure and functionality. In particular, the author’s high-performance multi-order coordinates (MOC) HEALPix image analysis library, HPMOC, is described. HPMOC increases performance of HEALPix image manipulations by several orders of magnitude vs. naive single-resolution approaches while presenting a simple high-level interface and should prove useful for diverse future MMA searches. The performance improvements it provides for LLAMA are also covered.
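For readers unfamiliar with multi-order coverage of the sphere, the standard IVOA MOC "NUNIQ" convention packs a HEALPix (order, pixel-index) pair into a single integer, uniq = 4 * 4^order + ipix, which is what allows pixels of mixed resolutions to live in one flat, sortable array. The Python sketch below illustrates that packing only; it is the general MOC idea, not HPMOC's actual interface.

# IVOA MOC "NUNIQ" packing: uniq = 4 * 4**order + ipix, where ipix is the
# NESTED-scheme HEALPix pixel index at that order. Mixed-resolution pixels
# can then be stored, sorted, and compared in a single flat integer array.

def to_uniq(order, ipix):
    return 4 * (4 ** order) + ipix

def from_uniq(uniq):
    order = (uniq.bit_length() - 3) // 2      # inverts 4 * 4**order <= uniq
    return order, uniq - 4 * (4 ** order)

def children(order, ipix):
    """The four order+1 pixels nested inside a given pixel."""
    return [(order + 1, 4 * ipix + k) for k in range(4)]

coarse = to_uniq(2, 37)
assert from_uniq(coarse) == (2, 37)
# Refining one level covers the same sky area with four finer entries.
refined = [to_uniq(o, p) for o, p in children(2, 37)]
assert all(from_uniq(u) == (3, 4 * 37 + k) for k, u in enumerate(refined))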
The final chapter of this dissertation builds on the approaches taken in developing HPMOC, presenting several novel methods for efficiently storing and analyzing large data sets, with applications to MMA and other data-intensive fields. A family of depth-first multi-resolution orderings of HEALPix images (DEPTH9, DEPTH19, and DEPTH40) is defined, along with algorithms and use cases where they can improve on current approaches, including high-speed streaming calculations suitable for serverless compute or FPGAs.
For performance-constrained analyses of HEALPix data (e.g., image analysis in multi-messenger search pipelines) on SIMD processors, breadth-first data structures can provide short-circuiting calculations in a data-parallel way on compressed data; a simple compression method is described, with application to further improving LLAMA performance.
A new storage scheme and associated algorithms for efficiently compressing and contracting tensors of varying sparsity are presented; these demuxed tensors (D-Tensors) have asymptotic time and space complexity equivalent to optimal representations of both dense and sparse matrices, and could be used as a universal drop-in replacement to reduce code complexity and developer effort while improving the performance of existing non-optimized numerical code.
Finally, the big bucket hash table (B-Table), a novel type of hash table making guarantees on data layout (rather than load factor), is described, along with the optimizations and use cases it enables (such as hardware acceleration, online rebuilds, and hard real-time applications) that are not possible with existing hash table approaches. These innovations are presented in the hope that some will prove useful for improving future MMA searches and other data-intensive applications.