
    Information Encoding for Flow Watermarking and Binding Keys to Biometric Data

    Given the current level of telecommunications development, fifth-generation (5G) communication systems are expected to provide higher data rates, lower latency, and improved scalability. To ensure the security and reliability of data traffic generated by wireless sources, 5G networks must be designed to support security protocols and reliable communication applications. The coding and processing of information during the transmission of both binary and non-binary data over nonstandard communication channels are described. A subclass of linear binary codes is considered, namely Varshamov-Tenengolts codes, which are used for channels with insertions and deletions of symbols. The use of these codes is compared with Hidden Markov Model (HMM)-based systems for detecting network intrusions using flow watermarking; both approaches provide a high true-positive rate. The principles of using Bose-Chaudhuri-Hocquenghem (BCH) codes, non-binary Reed-Solomon codes, and turbo codes, as well as concatenated code structures, to ensure noise immunity when reproducing information in Helper-Data Systems are considered. Examples are given of biometric systems built on these codes and operating on the basis of the Fuzzy Commitment Scheme (FCS), providing a false rejection rate (FRR) below 1% for authentication.
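    The single-deletion-correcting property of Varshamov-Tenengolts codes rests on a simple weighted checksum: VT_a(n) is the set of binary words x of length n with sum of i·x_i ≡ a (mod n+1). A minimal sketch of the membership test (illustrative only, not code from the paper):

```python
from itertools import product

def vt_syndrome(word):
    """Varshamov-Tenengolts syndrome: sum of i*x_i over 1-based positions."""
    return sum(i * b for i, b in enumerate(word, start=1))

def in_vt_code(word, a=0):
    """True if word lies in VT_a(n) = {x : sum i*x_i == a (mod n+1)},
    a code that can correct a single deletion or insertion."""
    n = len(word)
    return vt_syndrome(word) % (n + 1) == a

# Enumerate VT_0(4): the four words 0000, 0110, 1001, 1111.
code = [w for w in product((0, 1), repeat=4) if in_vt_code(w)]
```

The decoder locates a deleted symbol from the syndrome deficit, which is why these codes suit channels with symbol insertions and deletions.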

    An Improved Decoding Algorithm for the Davey-MacKay Construction

    The Deletion-Insertion Correcting Code construction proposed by Davey and MacKay consists of an inner code that recovers synchronization and an outer code that provides substitution error protection. The inner code uses low-weight codewords which are added (modulo two) to a pilot sequence. The receiver is able to synchronise on the pilot sequence in spite of the changes introduced by the added codeword. The original bit-level formulation of the inner decoder assumes that all bits in the sparse codebook are independently and identically distributed. Not only is this assumption inaccurate, but it also prevents the use of soft a priori input to the decoder. We propose an alternative symbol-level inner decoding algorithm that takes the actual codebook into account. Simulation results show that the proposed algorithm has improved performance with only a small penalty in complexity, and it allows further improvements using inner codes with larger minimum distance.
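    The sparse-codeword-plus-pilot idea can be illustrated with a toy encoder and a noiseless hard-decision decoder. The symbol size, block length, and codebook below are invented for illustration; the real Davey-MacKay inner decoder is probabilistic and must also track insertions and deletions:

```python
# Hypothetical parameters: 2-bit symbols mapped to low-weight 5-bit words
# (actual Davey-MacKay codebooks and lengths differ; this is only a sketch).
K, N = 2, 5
# Sparse codebook: every codeword here has Hamming weight <= 1, so the
# transmitted stream stays close to the pilot sequence.
CODEBOOK = {0: (0,0,0,0,0), 1: (0,0,0,0,1), 2: (0,0,0,1,0), 3: (0,0,1,0,0)}

def encode(symbols, pilot):
    """XOR each sparse codeword onto the matching slice of the pilot."""
    out = list(pilot)
    for i, s in enumerate(symbols):
        for j, b in enumerate(CODEBOOK[s]):
            out[i * N + j] ^= b
    return out

def decode(received, pilot):
    """Noiseless hard decoder: XOR off the pilot, look up each codeword.
    (The actual inner decoder is soft and handles synchronisation errors.)"""
    inv = {v: k for k, v in CODEBOOK.items()}
    stripped = [r ^ p for r, p in zip(received, pilot)]
    return [inv[tuple(stripped[i*N:(i+1)*N])]
            for i in range(len(stripped) // N)]
```

Because the codewords are low-weight, the received stream differs from the pilot in only a few positions, which is what lets the receiver re-synchronise on the pilot despite the embedded data.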

    SECURING BIOMETRIC DATA


    Graphics processing unit implementation and optimisation of a flexible maximum a-posteriori decoder for synchronisation correction

    The problem of correcting synchronisation errors has recently seen an increase in interest [1]. We believe this is due to two factors: recent applications for such codes, where traditional techniques for synchronisation cannot be applied, and the feasibility of decoding thanks to improvements in computing resources. A recent application is bit-patterned media [2, 3], where written-in errors can be modelled as synchronisation errors. Bit-patterned media is of great interest to the magnetic recording industry because of the potential increase in writing density. Another example is robust digital watermarking, where a message is embedded into a media file and an attacker seeks to make the message unreadable. An effective attack is to cause loss of synchronisation; synchronisation-correcting codes have been successfully applied to resist such attacks in speech [4] and image [5] watermarking.
    Most practical decoders for synchronisation correction work by extending the state space of the underlying code to account for the state of the channel (which represents the synchronisation error). This increases the decoding complexity significantly, particularly under poor channel conditions, where the state space is necessarily larger. Although optimal decoding is achievable, the complexity involved remains a barrier to wider adoption. The problem is even more pronounced when these codes are part of an iteratively decoded construction.
    A key practical synchronisation-correcting scheme is the concatenated construction by Davey and MacKay [6], where the inner code tracks synchronisation on an unbounded random insertion and deletion channel. We presented a maximum a-posteriori (MAP) decoder for a generalised construction of the inner code in [7] and improved encodings in [8]. In [9], we presented a parallel implementation of our MAP decoder on a graphics processing unit (GPU) using NVIDIA’s Compute Unified Device Architecture (CUDA) [10]. This resulted in a decoding speedup of up to two orders of magnitude, depending on code parameters and channel conditions. Since that work we have also presented a number of additional improvements to the MAP decoder algorithm [11], resulting in a speedup of over an order of magnitude in a serial implementation, as we shall show. Unfortunately, these algorithmic improvements change the proportion of time spent computing the various equations, so that a straightforward application of the algorithmic improvements to our earlier GPU implementation does not yield the expected speedup. A more careful parallelisation strategy is required, which we discuss in this paper.
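    The state-space extension described above can be sketched as a forward recursion over drift states, where the drift is the difference between received and transmitted symbol counts. The per-bit channel model below is a deliberate simplification (each bit is deleted, sent alone, or sent followed by at most one random insertion); it is a toy illustration of the trellis structure, not the decoder of [7]-[11]:

```python
from collections import defaultdict

def forward_likelihood(tx, rx, Pd=0.01, Pi=0.01, Ps=0.0):
    """Forward pass over drift states d = (#received) - (#transmitted).

    Simplified channel (an assumption for this sketch): each transmitted
    bit is deleted with prob Pd, sent alone with prob (1-Pd)(1-Pi), or
    sent followed by one uniformly random inserted bit with prob
    (1-Pd)*Pi; sent bits are flipped with prob Ps.
    """
    alpha = defaultdict(float)
    alpha[0] = 1.0                          # start with zero drift
    for i, bit in enumerate(tx):
        nxt = defaultdict(float)
        for d, p in alpha.items():
            j = i + d                       # received bits consumed so far
            nxt[d - 1] += p * Pd            # deletion: nothing emitted
            if j < len(rx):                 # plain transmission of one bit
                emit = (1 - Ps) if rx[j] == bit else Ps
                nxt[d] += p * (1 - Pd) * (1 - Pi) * emit
            if j + 1 < len(rx):             # transmission + one insertion
                emit = (1 - Ps) if rx[j] == bit else Ps
                nxt[d + 1] += p * (1 - Pd) * Pi * emit * 0.5
        alpha = nxt
    # all of rx must be consumed: final drift equals len(rx) - len(tx)
    return alpha.get(len(rx) - len(tx), 0.0)
```

The number of live drift states grows with the insertion/deletion rates, which is why decoding complexity rises under poor channel conditions; the independence of the per-drift updates in the inner loop is also what makes the recursion amenable to GPU parallelisation.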