
    Low-Density Parity-Check Codes From Transversal Designs With Improved Stopping Set Distributions

    Full text link
    This paper examines the construction of low-density parity-check (LDPC) codes from transversal designs based on sets of mutually orthogonal Latin squares (MOLS). By transferring the concept of configurations in combinatorial designs to the level of Latin squares, we thoroughly investigate the occurrence and avoidance of stopping sets for the arising codes. Stopping sets are known to determine the decoding performance over the binary erasure channel and should be avoided for small sizes. Based on large sets of simple-structured MOLS, we derive powerful constraints for the choice of suitable subsets, leading to improved stopping set distributions for the corresponding codes. We focus on LDPC codes with column weight 4, but the results are also applicable to the construction of codes with higher column weights. Finally, we show that a subclass of the presented codes has quasi-cyclic structure, which allows low-complexity encoding. Comment: 11 pages; to appear in "IEEE Transactions on Communications".
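    As an illustration of the construction summarised above (not the paper's specific parameter choices), the following sketch builds a column-weight-4 parity-check matrix from the transversal design defined by two MOLS of prime order q, using the simple cyclic squares L1(i,j) = i + j mod q and L2(i,j) = 2i + j mod q.

```python
import numpy as np

def mols_parity_check(q):
    """Build a column-weight-4 LDPC parity-check matrix from a transversal
    design based on two MOLS of prime order q (illustrative sketch).
    Rows: 4 point groups (rows, columns, symbols of L1, symbols of L2).
    Columns: the q^2 blocks, one per cell (i, j)."""
    H = np.zeros((4 * q, q * q), dtype=np.uint8)
    for i in range(q):
        for j in range(q):
            b = i * q + j                      # block index for cell (i, j)
            H[0 * q + i, b] = 1                # row point
            H[1 * q + j, b] = 1                # column point
            H[2 * q + (i + j) % q, b] = 1      # symbol point of L1(i, j)
            H[3 * q + (2 * i + j) % q, b] = 1  # symbol point of L2(i, j)
    return H

H = mols_parity_check(7)   # 28 x 49 matrix
print(H.sum(axis=0))       # every column has weight 4
print(H.sum(axis=1))       # every row has weight q = 7
```

    The point-block incidence matrix of this transversal design has 4q rows of weight q and q² columns of weight 4, matching the column weight discussed in the abstract.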

    Absorbing Set Analysis and Design of LDPC Codes from Transversal Designs over the AWGN Channel

    Full text link
    In this paper we construct low-density parity-check (LDPC) codes from transversal designs with low error-floors over the additive white Gaussian noise (AWGN) channel. The constructed codes are based on transversal designs that arise from sets of mutually orthogonal Latin squares (MOLS) with cyclic structure. For lowering the error-floors, our approach is twofold: First, we give an exhaustive classification of so-called absorbing sets that may occur in the factor graphs of the given codes. These purely combinatorial substructures are known to be the main cause of decoding errors in the error-floor region over the AWGN channel when decoding with the standard sum-product algorithm (SPA). Second, based on this classification, we exploit the specific structure of the presented codes to eliminate the most harmful absorbing sets and derive powerful constraints for the proper choice of code parameters in order to obtain codes with an optimized error-floor performance. Comment: 15 pages. arXiv admin note: text overlap with arXiv:1306.511
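    The absorbing sets mentioned above have a purely graph-theoretic definition, so a brute-force check is easy to sketch. The code below is illustrative only (the paper's classification is analytic and code-specific): it tests whether a set of variable nodes is an (a, b) absorbing set under the standard definition, i.e. every variable in the set has strictly more even-degree than odd-degree check neighbours in the induced subgraph.

```python
import numpy as np
from itertools import combinations

def absorbing_set_params(H, var_subset):
    """Return (a, b) if var_subset is an (a, b) absorbing set of the
    Tanner graph of H, otherwise None."""
    D = list(var_subset)
    deg = H[:, D].sum(axis=1)                 # degree of each check towards D
    odd_checks = set(np.flatnonzero(deg % 2 == 1))
    for v in D:
        nbrs = np.flatnonzero(H[:, v])        # checks adjacent to variable v
        odd = sum(c in odd_checks for c in nbrs)
        if 2 * odd >= len(nbrs):              # not strictly fewer odd than even
            return None
    return len(D), len(odd_checks)

def small_absorbing_sets(H, max_a=4):
    """Brute-force enumeration, feasible only for toy-sized codes."""
    found = []
    for a in range(1, max_a + 1):
        for sub in combinations(range(H.shape[1]), a):
            params = absorbing_set_params(H, sub)
            if params is not None:
                found.append((params, sub))
    return found
```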

    Structural Design and Analysis of Low-Density Parity-Check Codes and Systematic Repeat-Accumulate Codes

    Get PDF
    The discovery of two fundamental error-correcting code families, known as turbo codes and low-density parity-check (LDPC) codes, has led to a revolution in coding theory and to a paradigm shift from traditional algebraic codes towards modern graph-based codes that can be decoded by iterative message passing algorithms. From then on, it has become a focal point of research to develop powerful LDPC and turbo-like codes. Besides the classical domain of randomly constructed codes, an alternative and competitive line of research is concerned with highly structured LDPC and turbo-like codes based on combinatorial designs. Such codes are typically characterized by high code rates already at small to moderate code lengths and by good code properties such as the avoidance of harmful 4-cycles in the code's factor graph. Furthermore, their structure can usually be exploited for an efficient implementation; in particular, they can be encoded with low complexity, as opposed to random-like codes. Hence, these codes are suitable for high-speed applications such as magnetic recording or optical communication. This thesis greatly contributes to the field of structured LDPC codes and systematic repeat-accumulate (sRA) codes as a subclass of turbo-like codes by presenting new combinatorial construction techniques and algebraic methods for an improved code design. More specifically, novel and infinite families of high-rate structured LDPC codes and sRA codes are presented based on balanced incomplete block designs (BIBDs), which form a subclass of combinatorial designs. Besides showing excellent error-correcting capabilities under iterative decoding, these codes can be implemented efficiently, since their inner structure enables low-complexity encoding and accelerated decoding algorithms. A further infinite series of structured LDPC codes is presented based on the notion of transversal designs, which form another subclass of combinatorial designs. With a proper configuration, these codes reveal excellent decoding performance under iterative decoding, in particular with very low error-floors. The approach for lowering these error-floors is threefold. First, a thorough analysis of the decoding failures is carried out, resulting in an extensive classification of so-called stopping sets and absorbing sets. These combinatorial entities are known to be the main cause of decoding failures in the error-floor region over the binary erasure channel (BEC) and additive white Gaussian noise (AWGN) channel, respectively. Second, the specific code structures are exploited in order to calculate conditions for the avoidance of the most harmful stopping and absorbing sets. Third, powerful design strategies are derived for the identification of those code instances with the best error-floor performances. The resulting codes can additionally be encoded with low complexity and thus are ideally suited for practical high-speed applications. Further investigations are carried out on the infinite family of structured LDPC codes based on finite geometries. It is known that these codes perform very well under iterative decoding and that their encoding can be achieved with low complexity. By combining the latest findings in the fields of finite geometries and combinatorial designs, we generate new theoretical insights about the decoding failures of such codes under iterative decoding. These examinations finally help to identify the geometric codes with the most beneficial error-correcting capabilities over the BEC.
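    Since stopping sets govern iterative decoding over the BEC, a minimal peeling-decoder sketch makes the connection concrete: decoding stalls exactly when the remaining erased positions form a non-empty stopping set. This is a generic textbook decoder, not code from the thesis.

```python
import numpy as np

def peel_bec(H, erased):
    """Iterative (peeling) decoder over the binary erasure channel:
    repeatedly resolve erased bits that are the unique erasure in some
    check equation. The residual erased set, if non-empty, is the maximal
    stopping set contained in the erasure pattern."""
    erased = set(int(v) for v in erased)
    progress = True
    while progress and erased:
        progress = False
        for check in H:                           # each row of H
            unknown = [int(v) for v in np.flatnonzero(check) if int(v) in erased]
            if len(unknown) == 1:                 # exactly one erased bit here
                erased.discard(unknown[0])        # its value is determined
                progress = True
    return erased                                 # empty set <=> success
```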

    High-Rate Quantum Low-Density Parity-Check Codes Assisted by Reliable Qubits

    Get PDF
    Quantum error correction is an important building block for reliable quantum information processing. A challenging hurdle in the theory of quantum error correction is that it is significantly more difficult to design error-correcting codes with desirable properties for quantum information processing than for traditional digital communications and computation. A typical obstacle to constructing a variety of strong quantum error-correcting codes is the complicated restrictions imposed on the structure of a code. Recently, promising solutions to this problem have been proposed in quantum information science, where in principle any binary linear code can be turned into a quantum error-correcting code by assuming a small number of reliable quantum bits. This paper studies how best to take advantage of these latest ideas to construct desirable quantum error-correcting codes of very high information rate. Our methods exploit structured high-rate low-density parity-check codes available in the classical domain and provide quantum analogues that inherit their characteristic low decoding complexity and high error correction performance even at moderate code lengths. Our approach to designing high-rate quantum error-correcting codes also allows for making direct use of other major syndrome decoding methods for linear codes, making it possible to deal with a situation where promising quantum analogues of low-density parity-check codes are difficult to find.
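    One way to get a feel for "a small number of reliable quantum bits" is the closely related entanglement-assisted CSS framework, where the number of pre-shared perfect qubits needed for a pair of classical parity-check matrices Hx, Hz is the GF(2) rank of Hx·Hz^T. The sketch below computes that rank; it is an illustrative analogy, not the exact quantity defined in this paper.

```python
import numpy as np

def gf2_rank(A):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    A = A.copy() % 2
    rank, rows, cols = 0, A.shape[0], A.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]       # move pivot row up
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]                   # eliminate column c elsewhere
        rank += 1
    return rank

def assisted_qubits(Hx, Hz):
    """Entanglement-assisted CSS analogy: number of pre-shared perfect
    qubits c = rank_GF(2)(Hx @ Hz^T); c = 0 recovers an ordinary CSS code."""
    return gf2_rank(Hx @ Hz.T)
```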

    Improving Group Integrity of Tags in RFID Systems

    Get PDF
    Checking the integrity of groups containing radio frequency identification (RFID) tagged objects, or recovering the tag identifiers of missing objects, is important in many activities. Several autonomous checking methods have been proposed for increasing the capability of recovering missing tag identifiers without external systems. This has been achieved by treating a group of tag identifiers (IDs) as packet symbols encoded and decoded in a way similar to that in binary erasure channels (BECs). Redundant data are required to be written into the limited memory space of RFID tags in order to enable the decoding process. In this thesis, the group integrity of passive tags in RFID systems is specifically targeted, with novel mechanisms being proposed to improve upon the current state of the art. Due to the sparseness of low-density parity-check (LDPC) codes and the ability of the progressive edge-growth (PEG) method to mitigate short cycles, the research begins with the use of the PEG method in RFID systems to construct the parity-check matrix of LDPC codes, in order to increase the recovery capability with reduced memory consumption. It is shown that the PEG-based method achieves significant recovery enhancements compared to other methods with the same or less memory overhead. The decoding complexity of the PEG-based LDPC codes is optimised using an improved hybrid iterative/Gaussian decoding algorithm which includes an early stopping criterion. The relative complexities of the improved algorithm are extensively analysed and evaluated, both in terms of decoding time and the number of operations required. It is demonstrated that the improved algorithm considerably reduces the operational complexity, and thus the time, of the full Gaussian decoding algorithm for small to medium numbers of missing tags. The joint use of the two decoding components is also adapted in order to avoid the iterative decoding when the number of missing tags is larger than a threshold. The optimum threshold value is investigated through empirical analysis. It is shown that the adaptive algorithm is very efficient in decreasing the average decoding time of the improved algorithm for large numbers of missing tags, where the iterative decoding fails to recover any missing tag. The recovery performances of various short-length irregular PEG-based LDPC codes constructed with different variable degree sequences are analysed and evaluated. It is demonstrated that the irregular codes exhibit significant recovery enhancements compared to the regular ones in the region where the iterative decoding is successful. However, their performances are degraded in the region where the iterative decoding can recover some missing tags. Finally, a novel protocol called the Redundant Information Collection (RIC) protocol is designed to filter and collect redundant tag information. It is based on a Bloom filter (BF) that efficiently filters the redundant tag information at the tag's side, thereby considerably decreasing the communication cost and, consequently, the collection time. It is shown that the novel protocol outperforms existing possible solutions by saving from 37% to 84% of the collection time, which is nearly four times the lower bound. This characteristic makes the RIC protocol a promising candidate for collecting redundant tag information for the group integrity of tags in RFID systems and similar ones.
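    The RIC protocol's filtering step relies on a standard Bloom filter, which can be sketched in a few lines. The parameters, hash construction, and tag-data naming below are placeholders for illustration, not the values specified in the thesis.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: an m-bit array with k hash functions
    derived from SHA-256 with different salts. Illustrates how redundant
    tag information could be filtered before collection."""
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8 + 1)

    def _positions(self, item):
        for salt in range(self.k):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# A reader could broadcast the filter; tags whose redundant data are already
# covered (membership test true) stay silent, the rest respond.
bf = BloomFilter()
bf.add("TAG-0001:chunk3")
print("TAG-0001:chunk3" in bf, "TAG-0002:chunk1" in bf)  # True, (almost surely) False
```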

    Development of time projection chambers with micromegas for Rare Event Searches

    Get PDF
    Rare event searches form a heterogeneous field from the point of view of their physical motivations: neutrinoless double beta decay experiments and the direct detection of WIMPs as well as axions and other WISPs (candidates for dark matter, but also motivated by other open questions in particle physics). The field is rather defined by the requirements of these experiments, essentially a very sensitive detector with low background, usually operated in an underground laboratory. The availability of a rich description of the event registered by the detector is a powerful tool for discriminating signal from background, and the topological description of the interaction that a gaseous TPC can deliver is a useful source of such information. The generic requirements for a gaseous TPC intended for rare event searches are very good imaging capabilities, high gain and efficiency, stability, reliability and radiopurity; this may imply working with particular gases, in the absence of a quencher and at high pressure, with high granularity and state-of-the-art electronics, and everything must be scalable to larger detectors. Such requirements can be fulfilled by TPCs equipped with Micro-Pattern Gas Detectors, like Micromegas. The phenomenology of TPCs is studied in detail and R&D activities towards their application to rare event searches are reported, in particular regarding microbulk micromegas, the latest manufacturing technique. A large part of the work has been devoted to the development of libraries and programs for generic Monte Carlo simulations of low-energy TPCs and micromegas-specific processes (primary charge generation, drift processes, implementation of the readout, generation of the electronic signals), together with associated tools for information management and interpretation of the results. The role micromegas detectors have played in the CAST (CERN Axion Solar Telescope) experiment is reviewed, describing the strategies followed to improve the background by more than a factor of 50 from the beginning of the experiment up to 2011. To provide more precise guidelines for continuing and accelerating the encouraging evolution of the micromegas background in CAST, and to deliver prospects for IAXO (International AXion Observatory), a study of the CAST micromegas background is carried out relying on both simulations and test benches. Underground operation of a CAST detector with heavy shielding (at least 10 cm of lead) and improved radiopurity produced a background about 30 times lower than the CAST nominal background, demonstrating the potential of the detectors. The success of the 2012 upgrade of two of the CAST micromegas detectors, leading to an improvement of a factor of 5 in background level, has been the first application and confirmation of the conclusions from these studies. In conclusion, the prospects for the application of micromegas to rare event searches are encouraging for the issues that were proposed. Tests of the different aspects of micromegas operation demanded by rare event searches (high pressure, particular mixtures, absence of quencher) produced encouraging results. Moreover, the state-of-the-art micromegas manufacturing technique, microbulk, has been measured to be radiopure. The impressive progression of the background of the CAST micromegas detectors may be the most significant milestone.
    There has been an important advance in the understanding of the nature of the background, the potential of the different strategies applied, and the way the detector performance and the analysis methods interact with the different kinds of background events. It can be asserted that this progression, which has improved the background by more than two orders of magnitude since the first micromegas installation, will not stop at the present background level, and the future IAXO helioscope will be provided with more sensitive micromegas detectors. The ultra-low background obtained at the LSC (which is only an upper bound, probably not a real limit for the micromegas) is one of the facts supporting this assertion. Its significance goes beyond the application to helioscopes: it demonstrates the possibility of reaching ultra-low background below 10 keV with a low energy threshold.
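    The Monte Carlo tooling mentioned above covers, among other things, the drift of primary electrons towards the readout. A deliberately simplified drift-and-diffusion step is sketched below; the drift velocity and diffusion coefficients are placeholder values, not the gas parameters or the simulation framework used in the thesis.

```python
import numpy as np

def drift_electrons(z0_cm, n_electrons, v_drift_cm_us=0.5,
                    dt_coef_cm=0.02, dl_coef_cm=0.02, seed=0):
    """Toy Monte Carlo of electron drift in a TPC: each primary electron
    starts at depth z0 and drifts to the readout plane (z = 0), picking up
    Gaussian transverse/longitudinal diffusion that grows as sqrt(z)."""
    rng = np.random.default_rng(seed)
    sigma_t = dt_coef_cm * np.sqrt(z0_cm)     # transverse spread at the plane
    sigma_l = dl_coef_cm * np.sqrt(z0_cm)     # longitudinal spread
    x = rng.normal(0.0, sigma_t, n_electrons)
    y = rng.normal(0.0, sigma_t, n_electrons)
    t = (z0_cm + rng.normal(0.0, sigma_l, n_electrons)) / v_drift_cm_us
    return x, y, t                            # arrival positions (cm), times (us)

x, y, t = drift_electrons(z0_cm=10.0, n_electrons=1000)
print(x.std(), t.mean())
```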

    Development of a positron emission tomograph for “in-vivo” dosimetry in hadrontherapy

    Get PDF
    This thesis is related to the DoPET project, which aims to evaluate the feasibility of a dedicated Positron Emission Tomograph (PET) for measuring, monitoring, and verifying the radiation dose that is being delivered to the patient during hadrontherapy. Radiation therapy with protons and heavier ions is becoming a more common treatment option, with many new centers under construction or at the planning stage worldwide. The main physical advantage of these new treatment modalities is the high selectivity in the dose delivery: very little dose is deposited in healthy tissues beyond the particles' range. However, in clinical practice the beam path in the patient is not exactly known. This affects the quality of the treatment planning and may compromise the translation of the physical advantage into a clinical benefit. The use of a PET system immediately after the therapeutic irradiation ("in-beam") for in-vivo imaging of the tissue β+ activation produced by nuclear reactions of the ion beam with the target could help to achieve better control of the treatment delivery. The DoPET project, based on an Italian INFN collaboration, aims to explore one possible approach to the hadron-driven PET technique through the development of a dedicated device. This goal was reached through the validation of a PET prototype with proton irradiations on plastic phantoms at the CATANA proton therapy facility (LNS-INFN, Catania, Italy) and with carbon irradiations on plastic phantoms at the GSI synchrotron (Darmstadt, Germany). A preliminary comparison with an existing in-beam PET device was also performed. The candidate was involved in all aspects of this project, specifically the Monte Carlo simulations of the physical processes at the basis of phantom activation, the measurements for the characterization of the DoPET detector, the improvement of the image reconstruction algorithm, and the extensive measurements in plastic phantoms. The system and the methods described in this thesis have to be considered as a proof of principle, and the promising results justify a larger effort for the construction of a clinical system.
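    For context on the image reconstruction step mentioned above, the sketch below shows a generic MLEM (maximum-likelihood expectation-maximization) iteration, a textbook baseline for iterative PET reconstruction. It is not the tuned algorithm developed in the thesis; the system matrix A and measured data y are placeholders.

```python
import numpy as np

def mlem(A, y, n_iter=20):
    """Generic MLEM reconstruction sketch: A is the (n_lors x n_voxels)
    system matrix, y the measured coincidence counts per line of response.
    Each iteration applies the standard multiplicative MLEM update."""
    x = np.ones(A.shape[1])                 # flat initial image
    sens = A.sum(axis=0)                    # sensitivity image, sum_i a_ij
    sens[sens == 0] = 1.0                   # avoid division by zero
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        proj[proj == 0] = 1.0
        x *= (A.T @ (y / proj)) / sens      # multiplicative update
    return x

# toy usage with a random system matrix and Poisson-like data
A = np.random.default_rng(0).random((50, 16))
y = A @ np.ones(16)
print(mlem(A, y).round(2))
```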

    A Novel Liquid Argon Time Projection Chamber Detector: The ArgonCube Concept

    Get PDF
    The Standard Model of particle physics is remarkably successful in its explanation of experimental observations. An exception is the intriguing nature of neutrinos: in particular, neutrino flavour eigenstates do not coincide with their mass eigenstates. The flavour eigenstates are a mixture of the mass eigenstates, resulting in oscillations for non-zero neutrino masses. Neutrino mixing and oscillations have been studied extensively over the last few decades, probing the parameters of the three-flavour model. Nevertheless, unanswered questions remain: the possible existence of a Charge conjugation Parity symmetry (CP) violating phase in the mixing matrix and the ordering of the neutrino mass eigenstates. The Deep Underground Neutrino Experiment (DUNE) is being built to answer these questions via a detailed study of long-baseline neutrino oscillations. Like any beam experiment, DUNE requires two detectors: one near the source to characterise the unoscillated beam, and one far away to measure the oscillations. Achieving sensitivity to CP violation and mass ordering will require a data sample of unprecedented size and precision, calling for a high-intensity beam (2 MW) and massive detectors (40 kt at the far site). The detectors need to provide excellent tracking and calorimetry. Liquid Argon Time Projection Chambers (LArTPCs) were chosen as Far Detectors (FDs) because they fulfil these requirements. A LArTPC component is also necessary in the Near Detector (ND) complex to bring systematic uncertainties down to the required level of a few percent. A drawback of LArTPCs is their comparatively low speed due to the finite charge drift velocity (~1 mm/μs). Coupled with the high beam intensity, this results in event rates of 0.2 piled-up events per tonne in the ND. Such a rate poses significant challenges to traditional LArTPCs: their 3D tracking capabilities are limited by wire charge readouts providing only 2D projections. To address this problem, a pixelated charge readout was developed and successfully tested as part of this thesis. This is the first time pixels were deployed in a single-phase LArTPC, representing the single largest advancement in the sensitivity of LArTPCs by enabling true 3D tracking. A software framework was established to reconstruct cosmic muon tracks recorded with the pixels. Another problem with traditional LArTPCs is the large volume required by their monolithic design, resulting in long drift distances and, consequently, high drift voltages. Current LArTPCs are operating at the limit beyond which electric breakdowns readily occur. This prompted world-leading studies of breakdowns in LAr, including high-speed footage, current-voltage characteristics, and optical spectrometry. A breakdown-mitigation method was developed which allows LArTPCs to operate at electric fields an order of magnitude higher than previously achieved. It was found, however, that safe and prolonged operation can be achieved more effectively by keeping fields below 40 kV/cm at all points in the detector. Therefore, large inactive clearance volumes are required for traditional monolithic LArTPCs. Avoiding dead LAr volume intrinsically motivates a segmented TPC design with lower cathode voltages. The comprehensive conclusion of the HV and charge readout studies is the development of a novel, fully modular and pixelated LArTPC concept: ArgonCube. Splitting the detector volume into independent self-contained TPCs sharing a common LAr bath reduces the required drift voltages to a manageable level and minimises inactive material.
    ArgonCube is incompatible with traditional PMT-based light readouts, which occupy large volumes. A novel cold SiPM-based light collection system utilised in the pixel demonstrator TPC enabled the development of the compact ArgonCube Light readout system (ArCLight). ArgonCube's pixelated charge readout will exploit true 3D tracking, thereby reducing event pile-up and improving background rejection. Results of the pixel demonstration were used in simulations of the impact of pile-up for ArgonCube in the DUNE ND. The influence of piled-up π0-induced EM showers on neutrino energy reconstruction was investigated. Misidentified neutrino energy in ArgonCube is conservatively below 0.1% for more than 50% of the neutrino events, well within the DUNE error budget. The work described in this thesis has made ArgonCube the top candidate for the LAr component in the DUNE ND complex.
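    The "true 3D tracking from a pixelated readout" argument boils down to the standard TPC relation z = v_drift·(t − t0): each pixel gives x and y directly, and the drift time gives z. The sketch below illustrates this conversion with a placeholder pixel pitch and the order-of-magnitude drift velocity quoted in the abstract; it is not part of the ArgonCube software stack.

```python
import numpy as np

DRIFT_VELOCITY_MM_PER_US = 1.0   # order of magnitude quoted in the abstract
PIXEL_PITCH_MM = 4.0             # placeholder pitch, not the ArgonCube value

def hits_to_points(pixel_hits, t0_us):
    """Convert pixelated charge-readout hits into 3D space points:
    pixel indices give x and y, while the drift coordinate z is
    reconstructed from the arrival time relative to the event time t0."""
    points = []
    for px, py, t_us, charge in pixel_hits:
        x = px * PIXEL_PITCH_MM
        y = py * PIXEL_PITCH_MM
        z = DRIFT_VELOCITY_MM_PER_US * (t_us - t0_us)
        points.append((x, y, z, charge))
    return np.array(points)

track = hits_to_points([(10, 12, 105.0, 31.0), (11, 12, 118.0, 28.0)], t0_us=100.0)
print(track)   # (x, y, z, charge) rows in mm and arbitrary charge units
```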

    New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs

    Full text link
    Thesis by compendium. The relevance of electronics to the safety of everyday devices has only been growing, as an ever larger share of their functionality is assigned to electronic components. This comes along with a constant need for higher performance to fulfil such functionality requirements, while keeping power consumption and cost low. In this scenario, industry is struggling to provide a technology which meets all the performance, power and price specifications, at the cost of an increased vulnerability to several types of known faults or the appearance of new ones. To provide a solution for the new and growing faults in these systems, designers have been using traditional techniques from safety-critical applications, which in general offer suboptimal results. In fact, modern embedded architectures offer the possibility of optimizing the dependability properties by enabling the interaction of the hardware, firmware and software levels in the process; however, this potential has not yet been fully realized. Advances at every level in that direction are much needed if flexible, robust, resilient and cost-effective fault tolerance is desired. The work presented here focuses on the hardware level, with the background consideration of a potential integration into a holistic approach. The efforts in this thesis have focused on several issues: (i) to introduce additional fault models as required for an adequate representation of the physical effects emerging in modern manufacturing technologies, (ii) to provide tools and methods to efficiently inject both the proposed models and classical ones, (iii) to analyze the optimum method for assessing the robustness of systems by using extensive fault injection and later correlation with higher-level layers in an effort to cut development time and cost, (iv) to provide new detection methodologies to cope with the challenges modelled by the proposed fault models, (v) to propose mitigation strategies focused on tackling such new threat scenarios, and (vi) to devise an automated methodology for deploying many fault tolerance mechanisms in a systematic, robust way. The outcomes of the thesis constitute a suite of tools and methods to help the designer of critical systems in the task of developing robust, validated, and on-time designs tailored to the application.
    Espinosa García, J. (2016). New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/73146
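    As a concrete anchor for points (i) and (ii) of the list above, the sketch below injects two classical register-level fault models (a transient single-event bit flip and permanent stuck-at faults). It is a generic illustration; the additional fault models proposed in the thesis for modern technologies are not reproduced here.

```python
import random

def inject_fault(value, width=32, model="bit-flip", bit=None, rng=random):
    """Illustrative register-level fault injection for two classical models:
    a transient single bit-flip and permanent stuck-at-0/1 faults."""
    if bit is None:
        bit = rng.randrange(width)            # pick a random bit position
    mask = 1 << bit
    if model == "bit-flip":
        return value ^ mask                   # toggle one bit (transient SEU)
    if model == "stuck-at-1":
        return value | mask                   # force the bit to 1
    if model == "stuck-at-0":
        return value & ~mask & ((1 << width) - 1)
    raise ValueError(f"unknown fault model: {model}")

golden = 0x1234ABCD
faulty = inject_fault(golden, model="bit-flip", bit=7)
print(hex(golden), "->", hex(faulty))
```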

    On the direct detection of 229mTh

    Get PDF
    Measurements are described that have led to the direct detection of the isomeric first excited state of the thorium-229 nucleus.