
    Universal homophonic coding

    Redundancy in plaintext is a fertile source of attack in any encryption system. Compression before encryption reduces the redundancy in the plaintext, but this does not make a cipher more secure: the ciphertext is still susceptible to known-plaintext and chosen-plaintext attacks. The aim of homophonic coding is to convert a plaintext source into a random sequence by randomly mapping each source symbol into one of a set of homophones. Each homophone is then encoded by a source coder, after which it can be encrypted with a cryptographic system. The security of homophonic coding falls into the class of unconditionally secure ciphers. The main advantage of homophonic coding over pure source coding is that it provides security against both known-plaintext and chosen-plaintext attacks, whereas source coding merely protects against a ciphertext-only attack. The aim of this dissertation is to investigate the implementation of an adaptive homophonic coder based on an arithmetic coder. This type of homophonic coding is termed universal, as it is not dependent on the source statistics.
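
    A minimal sketch of the homophone idea itself (not the dissertation's adaptive arithmetic-coder construction): each source symbol is replaced, uniformly at random, by one of several homophones, and decoding simply inverts the many-to-one map. The alphabet and homophone counts below are invented for illustration.

```python
import secrets

# Hypothetical homophone table: each source symbol owns a disjoint set of codes.
# More frequent symbols would get more homophones so the coded stream looks uniform.
HOMOPHONES = {
    'a': [0, 1, 2, 3],   # frequent symbol: 4 homophones
    'b': [4, 5],         # rarer symbol: 2 homophones
    'c': [6],            # rare symbol: 1 homophone
}
# Inverse map: homophone -> source symbol (the mapping is many-to-one).
INVERSE = {h: s for s, hs in HOMOPHONES.items() for h in hs}

def encode(text):
    """Replace each symbol by a uniformly chosen homophone (randomized)."""
    return [secrets.choice(HOMOPHONES[ch]) for ch in text]

def decode(codes):
    """Decoding is deterministic: collapse homophones back to symbols."""
    return ''.join(INVERSE[h] for h in codes)

if __name__ == '__main__':
    msg = 'abacab'
    coded = encode(msg)
    assert decode(coded) == msg
    print(coded)  # a different random homophone sequence on each run
```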

    Low-Complexity Cryptographic Hash Functions

    Cryptographic hash functions are efficiently computable functions that shrink a long input into a shorter output while achieving some of the useful security properties of a random function. The most common type of such hash function is the collision-resistant hash function (CRH), which prevents an efficient attacker from finding a pair of inputs on which the function has the same output.
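
    As a concrete picture of the collision game (my own illustration, not from the paper): the attacker wins by producing x ≠ y with H(x) = H(y). Against a hash truncated to a few bytes, a birthday-style search succeeds quickly, which is why real CRHs need long outputs.

```python
import hashlib
import itertools

def toy_hash(data: bytes, out_bytes: int = 3) -> bytes:
    """SHA-256 truncated to a tiny output, so collisions are easy to find."""
    return hashlib.sha256(data).digest()[:out_bytes]

def find_collision(out_bytes: int = 3):
    """Birthday search: remember every digest until one repeats."""
    seen = {}
    for i in itertools.count():
        msg = str(i).encode()
        digest = toy_hash(msg, out_bytes)
        if digest in seen and seen[digest] != msg:
            return seen[digest], msg  # two distinct inputs, same output
        seen[digest] = msg

if __name__ == '__main__':
    x, y = find_collision()
    assert x != y and toy_hash(x) == toy_hash(y)
    print('collision:', x, y)
```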

    How to Garble Arithmetic Circuits

    Yao's garbled circuit construction transforms a boolean circuit $C:\{0,1\}^n\to\{0,1\}^m$ into a "garbled circuit" $\hat{C}$ along with $n$ pairs of $k$-bit keys, one for each input bit, such that $\hat{C}$ together with the $n$ keys corresponding to an input $x$ reveal $C(x)$ and no additional information about $x$. The garbled circuit construction is a central tool for constant-round secure computation and has several other applications. Motivated by these applications, we suggest an efficient arithmetic variant of Yao's original construction. Our construction transforms an arithmetic circuit $C:\mathbb{Z}^n\to\mathbb{Z}^m$ over integers from a bounded (but possibly exponential) range into a garbled circuit $\hat{C}$ along with $n$ affine functions $L_i:\mathbb{Z}\to\mathbb{Z}^k$ such that $\hat{C}$ together with the $n$ integer vectors $L_i(x_i)$ reveal $C(x)$ and no additional information about $x$. The security of our construction relies on the intractability of the learning with errors (LWE) problem.
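
    A toy sketch of the boolean case only (the paper's arithmetic, LWE-based construction is not shown here): each wire gets two random labels, and each gate row encrypts the correct output label under the pair of input labels, so an evaluator holding one label per input wire can open exactly one row. The key length and the hash-based "encryption" are simplifications chosen for illustration.

```python
import hashlib
import secrets

K = 16  # wire-label length in bytes (illustrative)

def new_label() -> bytes:
    return secrets.token_bytes(K)

def row_key(a: bytes, b: bytes) -> bytes:
    # Derive a one-time pad from the two input labels (hash used as a random oracle).
    return hashlib.sha256(a + b).digest()

def garble_and_gate():
    """Return wire labels and the garbled table for a single AND gate."""
    wa = {0: new_label(), 1: new_label()}   # labels for input wire a
    wb = {0: new_label(), 1: new_label()}   # labels for input wire b
    wc = {0: new_label(), 1: new_label()}   # labels for output wire c
    table = []
    for bit_a in (0, 1):
        for bit_b in (0, 1):
            out = wc[bit_a & bit_b]
            pad = row_key(wa[bit_a], wb[bit_b])
            # Encrypt the output label plus a zero tag so the right row is recognizable.
            ct = bytes(x ^ y for x, y in zip(out + b'\x00' * K, pad))
            table.append(ct)
    secrets.SystemRandom().shuffle(table)   # hide which row is which
    return wa, wb, wc, table

def evaluate(label_a: bytes, label_b: bytes, table):
    """Given one label per input wire, recover exactly one output label."""
    pad = row_key(label_a, label_b)
    for ct in table:
        pt = bytes(x ^ y for x, y in zip(ct, pad))
        if pt[K:] == b'\x00' * K:            # tag checks out: this is our row
            return pt[:K]
    raise ValueError('no row decrypted')

if __name__ == '__main__':
    wa, wb, wc, table = garble_and_gate()
    out = evaluate(wa[1], wb[0], table)      # evaluate AND(1, 0)
    print(out == wc[0])                      # True: only the label for output 0 is learned
```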

    Strategies towards high performance (high-resolution/linearity) time-to-digital converters on field-programmable gate arrays

    Time-correlated single-photon counting (TCSPC) technology has become popular in scientific research and industrial applications, such as high-energy physics, bio-sensing, non-invasive health monitoring, and 3D imaging. Because of the increasing demand for high-precision time measurements, time-to-digital converters (TDCs) have attracted attention since the 1970s. As a fully digital solution, TDCs are portable and have great potential for multichannel applications compared to bulky and expensive time-to-amplitude converters (TACs). A TDC can be implemented in ASIC and FPGA devices. Due to their low cost, flexibility, and short development cycle, FPGA-TDCs have become promising. Starting with a literature review, three original FPGA-TDCs with outstanding performance are introduced. The first design is the first efficient wave union (WU) based TDC implemented in Xilinx UltraScale (20 nm) FPGAs with a bubble-free sub-TDL structure. Combined with other existing methods, the resolution is further enhanced to 1.23 ps. The second TDC has been designed for LiDAR applications, especially in driverless vehicles. Using the proposed new calibration method, the resolution is adjustable (50, 80, and 100 ps), and the linearity is exceptionally high (the peak-to-peak INL is lower than 0.05 LSB). Meanwhile, a software tool with a graphical user interface (GUI) has been open-sourced to predict TDCs' performance. In the third TDC, an onboard automatic calibration (AC) function has been realized by exploiting Xilinx ZYNQ SoC architectures. The test results show the robustness of the proposed method. Without manual calibration, the AC function enables FPGA-TDCs to be applied in commercial products where mass production is required.
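
    For context on what TDC calibration typically involves (a generic code-density sketch, not the thesis's proposed method): feeding the TDC events that are uniform in time and histogramming the raw codes lets each bin's width, and hence its calibrated timestamp, be estimated from its hit count.

```python
import numpy as np

def code_density_calibration(codes, num_bins, full_scale_ps):
    """Estimate per-bin widths and bin-center timestamps from uniform hits.

    codes: raw TDC bin indices recorded for uniformly distributed events.
    full_scale_ps: time span covered by the delay line, in picoseconds.
    """
    counts = np.bincount(codes, minlength=num_bins).astype(float)
    widths = counts / counts.sum() * full_scale_ps      # wide bins catch more hits
    edges = np.concatenate(([0.0], np.cumsum(widths)))  # running sum gives bin edges
    centers = (edges[:-1] + edges[1:]) / 2               # calibrated time per code
    return widths, centers

if __name__ == '__main__':
    # Simulate a nonuniform 8-bin TDC measured with 100k uniform events.
    rng = np.random.default_rng(0)
    true_widths = np.array([5, 12, 9, 11, 8, 15, 10, 10], dtype=float)  # ps
    edges = np.concatenate(([0.0], np.cumsum(true_widths)))
    hits = rng.uniform(0, edges[-1], 100_000)
    codes = np.searchsorted(edges, hits, side='right') - 1
    widths, centers = code_density_calibration(codes, 8, edges[-1])
    print(np.round(widths, 2))  # approaches the true (unequal) bin widths
```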

    DeSyRe: on-Demand System Reliability

    The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chips (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would heavily, even prohibitively, impact the design, manufacturing, and testing costs, as well as the system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. In our attempt to reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured using the IEC 61508 functional safety standard) and tight power and performance constraints.

    AIS for Misbehavior Detection in Wireless Sensor Networks: Performance and Design Principles

    A sensor network is a collection of wireless devices that are able to monitor physical or environmental conditions. These devices (nodes) are expected to operate autonomously, be battery powered, and have very limited computational capabilities. This makes the task of protecting a sensor network against misbehavior or possible malfunction a challenging problem. In this document we discuss the performance of artificial immune systems (AIS) when used as the mechanism for detecting misbehavior. We show that (i) the mechanisms of the AIS have to be carefully applied in order to avoid security weaknesses, (ii) the choice of genes and their interaction have a profound influence on the performance of the AIS, (iii) randomly created detectors do not comply with limitations imposed by communication protocols, and (iv) the data traffic pattern does not seem to significantly impact the overall performance. We identified a specific MAC-layer-based gene that proved to be especially useful for detection; genes measure a network's performance from a node's viewpoint. Furthermore, we identified an interesting complementarity property of genes; this property exploits the local nature of sensor networks and moves the burden of excessive communication from normally behaving nodes to misbehaving nodes. These results have a direct impact on the design of AIS for sensor networks and on the engineering of sensor networks.
    Comment: 16 pages, 20 figures, a full version of our IEEE CEC 2007 paper
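
    A minimal sketch of the negative-selection idea behind such AIS detectors (my own simplification; the paper's genes are protocol-level measurements, not the toy feature vectors used here): detectors are generated at random, any detector that matches normal ("self") behavior is discarded, and the surviving detectors flag observations they match as misbehavior.

```python
import random

def matches(detector, sample, radius=1.5):
    """A detector 'matches' a sample if it is close in feature space."""
    return sum((d - s) ** 2 for d, s in zip(detector, sample)) ** 0.5 < radius

def negative_selection(self_samples, n_detectors, dim, lo=0.0, hi=10.0):
    """Keep only randomly generated detectors that match no self sample."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = [random.uniform(lo, hi) for _ in range(dim)]
        if not any(matches(cand, s) for s in self_samples):
            detectors.append(cand)
    return detectors

def is_misbehaving(sample, detectors):
    return any(matches(d, sample) for d in detectors)

if __name__ == '__main__':
    random.seed(1)
    # Toy 'genes': e.g. (packets forwarded, packets dropped) per observation window.
    normal = [[8 + random.random(), 1 + random.random()] for _ in range(200)]
    detectors = negative_selection(normal, n_detectors=50, dim=2)
    print(is_misbehaving([8.5, 1.5], detectors))  # normal-looking node -> likely False
    print(is_misbehaving([1.0, 9.0], detectors))  # heavy dropper -> likely True
```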

    Combined Industry, Space and Earth Science Data Compression Workshop

    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements, and the constraints imposed by the data collection, transmission, distribution and archival systems.

    Investigations in the design and analysis of key-stream generators

    iv + 113 pages; 24 cm.

    The 2nd Conference of PhD Students in Computer Science
