17 research outputs found

    A NEW SYSTEM TO INCREASE THE LIFE EXPECTANCY OF OPTICAL DISCS. PERFORMANCE ASSESSMENT WITH A DEDICATED EXPERIMENTAL SETUP.

    Get PDF
    The past decade has witnessed exponential growth in the use of digital media for large-scale data archiving. However, the expected lifespan of these media is inadequate with respect to the actual needs of heritage institutions. Stemming from the issues raised by UNESCO [1, 2], we address the problem of alleviating the effects of aging on optical discs. To this end, we propose a novel logical approach that counteracts the physical and chemical degradation of different types of optical discs, increasing their life expectancy. The objectives of this thesis are therefore the design and implementation of a new intelligent strategy aimed at increasing the life expectancy of optical discs. An experimental setup has been developed to investigate the physical and chemical degradation processes of discs by means of accelerated aging tests that simulate operational disc-use conditions. Critical areas are identified where disc degradation is statistically faster than average, while in safe areas the degradation is comparatively slow. To collect the required data, two experimental devices were built. The first is a climatic chamber able to induce artificial disc aging. The second is a robotic device able to measure the number of errors in each data block prior to the Reed-Solomon error-correction stage (before and after the accelerated aging stage, without dust, and in an environment with controlled temperature and humidity). Analysis of the collected data identifies the aforementioned physical areas in which data blocks have an error count that approaches or exceeds the correction capability of the standard Reed-Solomon code. These results led to the development of an "Adaptive Reed-Solomon code" (A-RS code) that protects the information stored within the critical areas. The A-RS code uses a redistribution algorithm applied to the parity symbols; the redistribution is derived from a degradation function obtained by fitting the experimentally measured errors. The algorithm shifts a certain number of parity symbols from data blocks in safe areas to data blocks in critical areas. Notably, this process does not diminish the storage capacity of Digital Versatile Discs (DVDs) or Blu-ray Discs (BDs). The strategy therefore avoids the early decommissioning of optical discs due to the possible loss of even small parts of the recorded information
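
    As a rough illustration of the parity-redistribution idea described above, the sketch below reallocates a fixed parity budget across data blocks in proportion to a per-block degradation weight, so that blocks in critical areas receive more parity symbols while the total (and hence the disc capacity) stays unchanged. The block layout, degradation weights, and budget are hypothetical; this is not the thesis' actual A-RS construction.

```python
# Hedged sketch of the parity-redistribution idea behind an "Adaptive
# Reed-Solomon" (A-RS) code: keep the total number of parity symbols fixed,
# but shift symbols from blocks in safe areas to blocks in critical areas.
# The degradation weights and block layout below are illustrative only.

def redistribute_parity(degradation, total_parity, min_parity=2):
    """Allocate a fixed parity budget across data blocks.

    degradation : list of non-negative weights, one per data block,
                  e.g. samples of a fitted degradation function.
    total_parity: overall number of parity symbols (kept constant, so the
                  disc's storage capacity is not reduced).
    min_parity  : floor kept even for the safest blocks.
    """
    n = len(degradation)
    budget = total_parity - min_parity * n
    if budget < 0:
        raise ValueError("total parity too small for the chosen floor")
    weight_sum = sum(degradation) or 1.0
    # Proportional share of the spare budget, rounded down.
    alloc = [min_parity + int(budget * w / weight_sum) for w in degradation]
    # Hand out any rounding leftovers to the most degraded blocks first.
    leftover = total_parity - sum(alloc)
    for i in sorted(range(n), key=lambda j: degradation[j], reverse=True)[:leftover]:
        alloc[i] += 1
    return alloc

if __name__ == "__main__":
    # Toy example: 6 data blocks, the last two lie in a "critical" area.
    degradation = [0.2, 0.2, 0.3, 0.3, 1.5, 2.0]
    parity = redistribute_parity(degradation, total_parity=48)
    print(parity, sum(parity))  # total stays at 48
```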

    Proceedings of the Second International Mobile Satellite Conference (IMSC 1990)

    Get PDF
    Presented here are the proceedings of the Second International Mobile Satellite Conference (IMSC), held June 17-20, 1990 in Ottawa, Canada. Topics covered include future mobile satellite communications concepts, aeronautical applications, modulation and coding, propagation and experimental systems, mobile terminal equipment, network architecture and control, regulatory and policy considerations, vehicle antennas, and speech compression

    Hardware realization of discrete wavelet transform cauchy Reed Solomon minimal instruction set computer architecture for wireless visual sensor networks

    Get PDF
    The large amount of image data transmitted across Wireless Visual Sensor Networks (WVSNs) increases the data rate and hence the transmission power. This inevitably shortens the operating lifespan of the sensor nodes and affects the overall operation of the WVSN. Limiting power consumption to prolong battery lifespan is therefore one of the most important goals in WVSNs. To achieve this goal, this thesis presents a novel low-complexity Discrete Wavelet Transform (DWT) Cauchy Reed-Solomon (CRS) Minimal Instruction Set Computer (MISC) architecture that performs data compression and data encoding (encryption) in a single architecture. Four programme instructions were developed to programme the MISC processor: Subtract and Branch if Negative (SBN), Galois Field Multiplier (GF MULT), XOR, and 11TO8. Using these instructions, the DWT CRS MISC was programmed to perform DWT image compression to reduce the image size and then encode the DWT coefficients with the CRS code to ensure data security and reliability. Both compression and CRS encoding are performed by a single architecture rather than by two separate modules, which would require many more hardware resources (logic slices). By reducing the number of logic slices, the power consumption can in turn be reduced. Results show that the proposed DWT CRS MISC architecture requires 142 slices (Xilinx Virtex-II), 129 slices (Xilinx Spartan-3E), 144 slices (Xilinx Spartan-3L), and 66 slices (Xilinx Spartan-6). The developed architecture has lower hardware complexity than other existing systems, such as a Crypto-Processor on Xilinx Spartan-6 (4828 slices), a Low-Density Parity-Check design on Xilinx Virtex-II (870 slices), and ECBC on Xilinx Spartan-3E (1691 slices). Using the RC10 development board, the DWT CRS MISC architecture can be implemented on the Xilinx Spartan-3L FPGA to emulate an actual visual sensor node, verifying the feasibility of a joint compression, encryption, and error-correction processing framework in WVSNs
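
    As a rough sketch of the arithmetic that the GF MULT and XOR instructions are intended to carry out in hardware, the code below builds a Cauchy matrix over GF(2^8) and computes parity blocks from data blocks, which is the core of Cauchy Reed-Solomon encoding. The field polynomial, block sizes, and sample data are illustrative assumptions, not the parameters used in the thesis.

```python
# Hedged sketch of Cauchy Reed-Solomon encoding over GF(2^8).
# Build log/antilog tables for GF(2^8) with primitive polynomial 0x11d.
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def gf_inv(a):
    return EXP[255 - LOG[a]]

def cauchy_matrix(m, k):
    """m x k Cauchy matrix with entries 1 / (x_i XOR y_j)."""
    xs = list(range(k, k + m))     # must be disjoint from ys
    ys = list(range(k))
    return [[gf_inv(xi ^ yj) for yj in ys] for xi in xs]

def crs_encode(data_blocks, m):
    """Compute m parity blocks from k equally sized data blocks."""
    C = cauchy_matrix(m, len(data_blocks))
    size = len(data_blocks[0])
    parity = []
    for row in C:
        p = bytearray(size)
        for coeff, block in zip(row, data_blocks):
            for i, byte in enumerate(block):
                p[i] ^= gf_mul(coeff, byte)   # GF MULT then XOR
        parity.append(bytes(p))
    return parity

if __name__ == "__main__":
    data = [b"wavelet ", b"coeffs. ", b"payload "]   # k = 3 data blocks
    print(crs_encode(data, m=2))                     # 2 parity blocks
```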

    Exploiting task-based programming models for resilience

    Get PDF
    Hardware errors become more common as silicon technologies shrink and become more vulnerable, especially in memory cells, which are the most exposed to errors. Permanent and intermittent faults are caused by manufacturing variability and circuit ageing. While these can be mitigated once they are identified, their continuous rate of appearance throughout the lifetime of memory devices will always cause unexpected errors. In addition, transient faults are caused by effects such as radiation or small voltage/frequency margins, and there is no efficient way to shield against these events. Other constraints related to the diminishing size of transistors, such as power consumption and memory latency, have caused the microprocessor industry to turn to increasingly complex processor architectures. To ease the difficulty of programming such architectures, programming models have emerged that rely on runtime systems. These systems form a new intermediate layer in the hardware-software abstraction stack that performs tasks such as distributing work across computing resources: processor cores, accelerators, etc. Runtime systems have access to a wealth of information from both the hardware and the applications, and thus offer many opportunities for optimisation. This thesis proposes solutions to the increasing fault rates in memory across multiple resilience disciplines, from algorithm-based fault tolerance to hardware error-correcting codes, through OS reliability strategies. These solutions rely for their efficiency on the opportunities presented by runtime systems. The first contribution of this thesis is an algorithm-based resilience technique that tolerates detected errors in memory. The technique recovers lost data by performing computations that rely on simple redundancy relations identified in the program. The recovery is demonstrated for a family of iterative solvers, the Krylov subspace methods, and evaluated for the conjugate gradient solver. The runtime can transparently overlap the recovery with the computations of the algorithm, masking the already low overheads of this technique. The second part of this thesis proposes a metric to characterise the impact of faults in memory, which outperforms state-of-the-art metrics in precision and in the assurances it provides on the error rate. This metric reveals a key insight about data that is not relevant to the program, and we propose an OS-level strategy that ignores errors in such data by delaying the reporting of detected errors. This reduces the failure rates of running programs by ignoring errors that have no impact. The architectural-level contribution of this thesis is a dynamically adaptable Error Correcting Code (ECC) scheme that can increase the protection of memory regions where the impact of errors is highest. A runtime methodology is presented to estimate the fault rate at runtime using our metric, through the performance-monitoring tools of current commodity processors. Guiding the dynamic ECC scheme online with the methodology's vulnerability estimates decreases the error rates of programs at a fraction of the redundancy cost required for a uniformly stronger ECC, providing a useful and wide range of trade-offs between redundancy and error rate. The work presented in this thesis demonstrates that runtime systems make it possible to exploit the redundancy present in memory to help tackle increasing error rates in DRAM. This exploited redundancy can be an inherent part of an algorithm, allowing higher fault rates to be tolerated, or take the form of dead data stored in memory. Redundancy can also be added to a program in the form of ECC. In all cases, the runtime helps decrease failure rates efficiently by diminishing recovery costs, identifying redundant data, or targeting critical data. It is thus a very valuable tool for future computing systems, as it can perform optimisations across different layers of abstraction
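
    A minimal sketch of the algorithm-based recovery idea, assuming a conjugate-gradient-style state in which the residual obeys r = b - Ax: a residual block flagged as lost by the memory system can be recomputed from data that is still intact. The matrix, sizes, and fault location are illustrative assumptions; this is not the thesis' runtime integration.

```python
# Hedged sketch: in conjugate gradient, the residual satisfies the
# redundancy relation r = b - A @ x at every iteration, so a residual block
# lost to a memory error can be rebuilt from intact data.
import numpy as np

def recover_residual_block(A, b, x, r, lost, block):
    """Rebuild rows [lost, lost+block) of the residual in place."""
    rows = slice(lost, lost + block)
    r[rows] = b[rows] - A[rows, :] @ x    # only the lost rows are recomputed
    return r

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, block = 8, 2
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)           # SPD matrix, as CG requires
    x = rng.standard_normal(n)
    b = rng.standard_normal(n)
    r = b - A @ x                         # consistent CG state
    r_faulty = r.copy()
    r_faulty[2:4] = np.nan                # simulate a detected memory error
    recovered = recover_residual_block(A, b, x, r_faulty, lost=2, block=2)
    print(np.allclose(recovered, r))      # True: the relation restores r
```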

    The 1992 4th NASA SERC Symposium on VLSI Design

    Get PDF
    Papers from the fourth annual NASA Symposium on VLSI Design, co-sponsored by the IEEE, are presented. Each year this symposium is organized by the NASA Space Engineering Research Center (SERC) at the University of Idaho and is held in conjunction with a quarterly meeting of the NASA Data System Technology Working Group (DSTWG). One task of the DSTWG is to develop new electronic technologies that will meet next generation electronic data system needs. The symposium provides insights into developments in VLSI and digital systems which can be used to increase data systems performance. The NASA SERC is proud to offer, at its fourth symposium on VLSI design, presentations by an outstanding set of individuals from national laboratories, the electronics industry, and universities. These speakers share insights into next generation advances that will serve as a basis for future VLSI design

    Energy-efficient design and implementation of turbo codes for wireless sensor network

    No full text
    The objective of this thesis is to apply near-Shannon-limit Error-Correcting Codes (ECCs), particularly turbo-like codes, to energy-constrained wireless devices for the purpose of extending their lifetime. Conventionally, sophisticated ECCs are applied in systems such as mobile telephone networks or satellite television networks to facilitate long-range, high-throughput wireless communication. For low-power applications such as Wireless Sensor Networks (WSNs), these ECCs were rarely considered, owing to their high decoder complexity. In particular, the energy efficiency of the sensor nodes in WSNs is one of the most important factors in their design. The processing energy consumption required by high-complexity ECC decoders is a significant drawback, which impacts the overall energy consumption of the system. However, as Integrated Circuit (IC) processing technology is scaled down, the processing energy consumed by hardware resources decreases exponentially. As a result, near-Shannon-limit ECCs have recently begun to be considered for use in WSNs to reduce the transmission energy consumption [1,2]. However, to ensure that the transmission energy reduction granted by the employed ECC yields a positive improvement in the overall energy efficiency of the system, the processing energy consumption must still be carefully considered. The main subject of this thesis is to optimise the design of turbo codes, at both the algorithmic and the hardware implementation level, for WSN scenarios. The communication requirements of the target WSN applications, such as communication distance, channel throughput, network scale, transmission frequency, and network topology, are investigated. These requirements are important factors in the design of a channel coding system; especially when energy resources are limited, the trade-offs between the requirements placed on different parameters must be carefully weighed in order to minimise the overall energy consumption. Based on this investigation, the advantages of employing near-Shannon-limit ECCs in WSNs are discussed. Low-complexity and energy-efficient hardware implementations of the ECC decoders are essential for the target applications
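
    A back-of-the-envelope sketch of the transmission-versus-processing trade-off discussed above: coding gain lowers the radiated energy needed per information bit, but rate overhead and decoder energy add costs, so the ECC only pays off beyond some distance. The radio model and all constants (coding gain, code rate, decoder energy per bit) are illustrative assumptions, not measurements from the thesis.

```python
# Hedged sketch: an ECC reduces transmit energy via coding gain but adds
# decoder processing energy, so it is only worthwhile past a break-even
# distance. All constants below are illustrative, not measured values.

def energy_per_info_bit(d, coded, *, e_elec=50e-9, eps_amp=100e-12,
                        alpha=2.0, rate=1/3, gain_db=6.0, e_dec=30e-9):
    """Energy (J) to deliver one information bit over distance d (m)."""
    if not coded:
        return e_elec + eps_amp * d**alpha
    overhead = 1.0 / rate                    # extra coded bits sent
    amp_scale = 10 ** (-gain_db / 10.0)      # coding gain cuts radiated energy
    e_tx = overhead * (e_elec + amp_scale * eps_amp * d**alpha)
    return e_tx + e_dec                      # decoder processing energy

if __name__ == "__main__":
    for d in (10, 50, 100, 200):
        plain = energy_per_info_bit(d, coded=False)
        turbo = energy_per_info_bit(d, coded=True)
        print(f"d={d:4} m  uncoded={plain:.2e} J  coded={turbo:.2e} J  "
              f"{'ECC wins' if turbo < plain else 'uncoded wins'}")
```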

    DIGITAL WATERMARKING FOR COMPACT DISCS AND THEIR EFFECT ON THE ERROR CORRECTION SYSTEM

    Get PDF
    A new technique, based on current compact disc technology, to image the transparent surface of a compact disc, or additionally the reflective information layer, has been designed, implemented, and evaluated. This image capture technique has been tested and successfully applied to the detection of mechanically introduced compact disc watermarks and biometric information with a resolution of 1.6 um x 14 um. Software has been written which, when used with the image capture technique, recognises a compact disc from its error distribution. The software detects digital watermarks that cause either laser-signal distortions or decoding error events; such watermarks serve as secure media identifiers. The complete channel coding of a Compact Disc Audio system, including EFM modulation, error correction, and interleaving, has been implemented in software, and the performance of the compact disc's error-correction system has been assessed using this simulation model. An embedded data channel holding watermark data has been investigated. The covert channel is implemented by means of the error-correction ability of the Compact Disc system and was realised with the aforementioned techniques, such as engraving the reflective layer or the polysubstrate layer. Computer simulations show that watermarking schemes composed of regularly distributed single errors impose a minimal burden on the error-correction system: error rates increase by a factor of ten if regular single-symbol errors are introduced in each frame, and all other patterns increase the overall error rates further. Results show that background signal noise has to be reduced by 60% to account for the additional burden of this optimal watermark pattern. Two decoding strategies, as employed in modern CD decoders, have been examined. The simulations take into account emulated bursty background noise as it appears in user-handled discs. Variations in output error rates, depending on the decoder and the type of background noise, became apparent. At low error rates (r < 0.003) the output symbol error rate for a bursty background differs by 20% depending on the decoder. The difference between a typical burst error distribution caused by user handling and a non-burst error distribution has been found to be approximately 1% with the higher-performing decoder. Simulation results show that the drop in error-correction performance due to the presence of a watermark pattern depends quantitatively on the characteristic type of background noise: a four-times-smaller change in the overall error rate was observed when adding a regular watermark pattern to a characteristic background noise caused by user handling, compared to a non-bursty background
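
    A simplified sketch of the kind of Monte-Carlo experiment described above: add one regular single-symbol watermark error per frame on top of random background symbol errors and count the frames that exceed a Reed-Solomon correction limit. The frame length, correction capability, and error probabilities are stand-ins, not the actual CIRC model used in the thesis.

```python
# Hedged sketch: estimate how one regular watermark error per frame raises
# the rate of frames that exceed a Reed-Solomon correction limit.
# Parameters are simplified stand-ins for the real CIRC decoder.
import random

def uncorrectable_rate(p_background, watermark, *, frames=100_000,
                       frame_len=32, t_correctable=2, seed=1):
    rng = random.Random(seed)
    failed = 0
    for _ in range(frames):
        errors = sum(rng.random() < p_background for _ in range(frame_len))
        if watermark:
            errors += 1           # one regular single-symbol error per frame
        if errors > t_correctable:
            failed += 1
    return failed / frames

if __name__ == "__main__":
    for p in (0.003, 0.01, 0.03):
        base = uncorrectable_rate(p, watermark=False)
        marked = uncorrectable_rate(p, watermark=True)
        print(f"p={p}: uncorrectable frames {base:.2e} -> {marked:.2e}")
```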

    Decryption Failure Attacks on Post-Quantum Cryptography

    Get PDF
    This dissertation discusses mainly new cryptanalytic results related to the problem of securely implementing the next generation of asymmetric cryptography, or Public-Key Cryptography (PKC). PKC, as it has been deployed to date, depends heavily on the integer factorization and discrete logarithm problems. Unfortunately, it has been well known since the mid-90s that these mathematical problems can be solved by Peter Shor's algorithm for quantum computers, which obtains the answers in polynomial time. The recently accelerated pace of R&D towards quantum computers, eventually of sufficient size and power to threaten cryptography, has led the crypto research community towards a major shift of focus. A project towards the standardization of Post-Quantum Cryptography (PQC) was launched by the US-based standardization organization NIST. PQC is the name given to algorithms designed to run on classical hardware/software while being resistant to attacks from quantum computers, and it is well suited for replacing the current asymmetric schemes. A primary motivation for the project is to guide publicly available research toward the singular goal of finding weaknesses in the proposed next generation of PKC. For public-key encryption (PKE) or digital signature (DS) schemes to be considered secure, they must be shown to rely heavily on well-known mathematical problems with theoretical proofs of security under established models, such as indistinguishability under chosen-ciphertext attack (IND-CCA). They must also withstand serious attack attempts by well-renowned cryptographers, both concerning theoretical security and the actual software/hardware instantiations. It is well known that security models such as IND-CCA are not designed to capture the intricacies of inner-state leakages. Such leakages are called side channels, which are currently a major topic of interest in the NIST PQC project. This dissertation focuses on two questions: 1) how does the low but non-zero probability of decryption failures affect the cryptanalysis of these new PQC candidates? And 2) how might side-channel vulnerabilities inadvertently be introduced when going from theory to the practice of software/hardware implementations? The main concern is PQC algorithms based on lattice theory and coding theory. The primary contributions are the discovery of novel decryption-failure side-channel attacks, improvements on existing attacks, an alternative implementation of part of a PQC scheme, and some more theoretical cryptanalytic results
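
    A small worked example of why a "low but non-zero" decryption failure probability matters for cryptanalysis: the chance that an adversary who submits N ciphertexts observes at least one failure grows with N. The per-ciphertext failure rate below is an assumed figure, not a parameter of any particular scheme.

```python
# Hedged back-of-the-envelope sketch: probability of observing at least one
# decryption failure after N ciphertexts, given a per-ciphertext failure
# probability p_fail. The value of p_fail is an illustrative assumption.
import math

def p_at_least_one_failure(p_fail, n_queries):
    # Use expm1/log1p to stay accurate when p_fail is tiny.
    return -math.expm1(n_queries * math.log1p(-p_fail))

if __name__ == "__main__":
    p = 2.0 ** -64                       # assumed per-ciphertext failure rate
    for n in (2**40, 2**50, 2**60):
        print(f"N=2^{n.bit_length()-1}: P(>=1 failure) = "
              f"{p_at_least_one_failure(p, n):.3e}")
```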