12 research outputs found

    Design and performance of CDMA codes for multiuser communications

    Walsh and Gold sequences are fixed-power codes widely used in multiuser CDMA communications; their popularity is due to ease of implementation. The availability of these code sets is limited by their generating kernels. Emerging radio applications such as sensor networks, or multiple service types in mobile and peer-to-peer communication networks, might benefit from the flexibility in code lengths and allocation methodologies provided by a large library of code sets. Walsh codes are linear-phase and zero-mean, with a unique number of zero crossings for each sequence within the set, and the DC sequence is part of the Walsh code set. Although these features are quite beneficial for source coding applications, they are not essential for spread spectrum communications. By relaxing these unnecessary constraints, new sets of orthogonal binary user codes (Walsh-like) of different lengths are obtained, with BER performance comparable to standard code sets in all channel conditions. Although fixed-power codes are easier to implement, varying-power codes offer lower inter- and intra-code correlations. With recent advances in RF power amplifier design, it might be possible to implement multiple-level orthogonal spread spectrum codes for an efficient direct-sequence CDMA system. A number of multiple-level integer codes have been generated by brute-force search for different lengths to highlight the possible BER performance improvement over binary codes. An analytical design method has been developed for multiple-level (variable-power) spread spectrum codes using the Karhunen-Loeve Transform (KLT). An eigendecomposition is used to generate spread spectrum basis functions that are jointly spread in the time and frequency domains for a given covariance matrix or power spectral density function. Since this is a closed-form solution for orthogonal code set design, many options are possible for different code lengths.
Design examples and performance simulations showed that spread spectrum KLT codes outperform, or closely match, the standard codes employed in present CDMA systems. Hybrid (Kronecker) codes are generated by taking the Kronecker product of two spreading code families in a two-stage orthogonal transmultiplexer structure and are judiciously allocated to users such that their inter-code correlations are minimized. It is shown that the BER performance of hybrid codes with a code selection and allocation algorithm is better than that of the standard Walsh or Gold code sets for asynchronous CDMA communications. A redundant spreading code technique is proposed utilizing a multiple-stage orthogonal transmultiplexer structure where each user has its own pre-multiplexer. Each data bit is redundantly spread in the pre-multiplexer stage of a user with an odd redundancy factor, and at the receiver a majority-logic decision is applied to the detected redundant bits to obtain an overall performance improvement. Simulation results showed that the redundant spreading method improves BER performance significantly in low-SNR channel conditions.
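    As a rough illustration of two of the constructions above, the Python sketch below builds Walsh codes by repeated Kronecker products of the 2x2 Hadamard kernel and forms a hybrid (Kronecker) code set from two Walsh families. The dimensions and the Walsh-by-Walsh pairing are illustrative choices only; the code selection and allocation algorithm from the abstract is not reproduced here.

```python
import numpy as np

def walsh_hadamard(n):
    """2^n x 2^n Walsh-Hadamard code matrix via repeated Kronecker
    products of the 2x2 kernel; each row is one spreading code."""
    H = np.array([[1, 1], [1, -1]])
    M = np.array([[1]])
    for _ in range(n):
        M = np.kron(M, H)
    return M

# Hybrid (Kronecker) codes: Kronecker product of two spreading families.
W4 = walsh_hadamard(2)      # length-4 Walsh codes
hybrid = np.kron(W4, W4)    # length-16 hybrid codes

# Orthogonality: C C^T = N I for an orthogonal code set of length N.
assert np.array_equal(W4 @ W4.T, 4 * np.eye(4, dtype=int))
assert np.array_equal(hybrid @ hybrid.T, 16 * np.eye(16, dtype=int))
```

    The Kronecker product preserves orthogonality (kron(A, B) kron(A, B)^T = kron(AA^T, BB^T)), which is why the two-stage transmultiplexer structure yields an orthogonal hybrid set automatically.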

    Invariants for EA- and CCZ-equivalence of APN and AB functions

    An (n,m)-function is a mapping from $\mathbb{F}_{2}^{n}$ to $\mathbb{F}_{2}^{m}$. Such functions have numerous applications across mathematics and computer science, and in particular are used as building blocks of block ciphers in symmetric cryptography. The classes of APN and AB functions have been identified as cryptographically optimal with respect to resistance against two of the most powerful known cryptanalytic attacks, namely differential and linear cryptanalysis. The classes of APN and AB functions are directly related to optimal objects in many other branches of mathematics, and have been a subject of intense study since at least the early 90's. Finding new constructions of these functions is hard; one of the most significant practical issues is that any tentatively new function must be proven inequivalent to all the known ones. Testing equivalence can be significantly simplified by computing invariants, i.e. properties that are preserved by the respective equivalence relation. In this paper, we survey the known invariants for CCZ- and EA-equivalence, with a particular focus on their utility in distinguishing between inequivalent instances of APN and AB functions. We evaluate each invariant with respect to how easy it is to implement in practice, how efficiently it can be calculated on a computer, and how well it can distinguish between distinct EA- and CCZ-equivalence classes.
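    For concreteness, the APN property itself can be tested exhaustively for small n: F is APN exactly when every nonzero derivative x -> F(x + a) + F(x) is 2-to-1, i.e. the differential uniformity equals 2. The following minimal Python check is only a sketch; the field arithmetic, the choice of n = 3, and the Gold function F(x) = x^3 are illustrative and not taken from the survey.

```python
def gf_mul(a, b, n=3, poly=0b1011):
    """Multiplication in GF(2^n), reducing by the irreducible
    polynomial x^3 + x + 1 (an illustrative choice for n = 3)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> n) & 1:
            a ^= poly
    return r

def differential_uniformity(F, n):
    """Max number of solutions x of F(x ^ a) ^ F(x) = b over all
    nonzero a and all b; a function is APN iff this maximum is 2."""
    best = 0
    for a in range(1, 2**n):
        counts = {}
        for x in range(2**n):
            b = F(x ^ a) ^ F(x)
            counts[b] = counts.get(b, 0) + 1
        best = max(best, max(counts.values()))
    return best

cube = lambda x: gf_mul(gf_mul(x, x), x)   # x^3, the Gold function for n = 3
assert differential_uniformity(cube, 3) == 2   # x^3 is APN over F_2^3
```

    Brute force of this kind scales as roughly 2^(2n) evaluations, which is exactly why the invariants surveyed in the paper matter for larger dimensions.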

    Towards a deeper understanding of APN functions and related longstanding problems

    This dissertation is dedicated to the properties, construction and analysis of APN and AB functions. Being cryptographically optimal, these functions lack any general structure or patterns, which makes their study very challenging. Despite intense work since at least the early 90's, many important questions and conjectures in the area remain open. We present several new results, many of which are directly related to important longstanding open problems; we resolve some of these problems, and make significant progress towards the resolution of others. More concretely, our research concerns the following open problems: i) the maximum algebraic degree of an APN function, and the Hamming distance between APN functions (open since 1998); ii) the classification of APN and AB functions up to CCZ-equivalence (an ongoing problem since the introduction of APN functions, and one of the main directions of research in the area); iii) the extension of the APN binomial $x^3 + \beta x^{36}$ over $\mathbb{F}_{2^{10}}$ into an infinite family (open since 2006); iv) the Walsh spectrum of the Dobbertin function (open since 2001); v) the existence of monomial APN functions CCZ-inequivalent to ones from the known families (open since 2001); vi) the problem of efficiently and reliably testing EA- and CCZ-equivalence (ongoing, and open since the introduction of APN functions). In the course of investigating these problems, we obtain, among others,
the following results: 1) a new infinite family of APN quadrinomials (which includes the binomial $x^3 + \beta x^{36}$ over $\mathbb{F}_{2^{10}}$); 2) two new invariants, one under EA-equivalence and one under CCZ-equivalence; 3) an efficient and easily parallelizable algorithm for computationally testing EA-equivalence; 4) an efficiently computable lower bound on the Hamming distance between a given APN function and any other APN function; 5) a classification of all quadratic APN polynomials with binary coefficients over $\mathbb{F}_{2^n}$ for $n \le 9$; 6) a construction allowing the CCZ-equivalence class of one monomial APN function to be obtained from that of another; 7) a conjecture giving the exact form of the Walsh spectrum of the Dobbertin power functions; 8) a generalization of an infinite family of APN functions to a family of functions with a two-valued differential spectrum, and an example showing that this Gold-like behavior does not occur for infinite families of quadratic APN functions in general; 9) a new class of functions (the so-called partially APN functions) defined by relaxing the definition of the APN property, and several constructions and non-existence results related to them.
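    The Walsh spectrum mentioned in several of the problems above is directly computable for small dimensions. The sketch below computes the set of Walsh coefficients W_F(a, b) = sum_x (-1)^(<b,F(x)> + <a,x>), b != 0, whose multiset of values is preserved by CCZ-equivalence; the field arithmetic and the choice of the Gold function x^3 over F_2^3 are illustrative assumptions, not taken from the dissertation.

```python
def gf_mul(a, b, n=3, poly=0b1011):
    """Multiplication in GF(2^3), reducing by x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> n) & 1:
            a ^= poly
    return r

def walsh_spectrum(F, n):
    """Set of Walsh coefficient values of F over all a and all b != 0."""
    dot = lambda u, v: bin(u & v).count("1") & 1  # inner product on F_2^n
    spec = set()
    for b in range(1, 2**n):
        for a in range(2**n):
            w = sum((-1) ** (dot(b, F(x)) ^ dot(a, x)) for x in range(2**n))
            spec.add(w)
    return sorted(spec)

cube = lambda x: gf_mul(gf_mul(x, x), x)   # x^3, a Gold (AB) function for odd n
print(walsh_spectrum(cube, 3))   # AB functions on F_2^3 take values {0, +-4}
```

    For AB functions the spectrum is exactly {0, +-2^((n+1)/2)}, which is the three-valued shape the Dobbertin conjecture in result 7) generalizes away from.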

    Convex reconstruction from structured measurements

    Convex signal reconstruction is the art of solving ill-posed inverse problems via convex optimization. It is applicable to a great number of problems from engineering, signal analysis, quantum mechanics and many more. The most prominent example is compressed sensing, where one aims at reconstructing sparse vectors from an under-determined set of linear measurements. In many cases, one can prove rigorous performance guarantees for these convex algorithms. The combination of practical importance and theoretical tractability has directed a significant amount of attention to this young field of applied mathematics. However, rigorous proofs are usually only available for certain "generic" cases---for instance, situations where all measurements are represented by random Gaussian vectors. The focus of this thesis is to overcome this drawback by devising mathematical proof techniques that can be applied to more "structured" measurements. Here, structure can have various meanings. For example, it could refer to the type of measurements that occur in a given concrete application. Or, more abstractly, structure in the sense that a measurement ensemble is small and exhibits rich geometric features. The main focus of this thesis is phase retrieval: the problem of inferring phase information from amplitude measurements. This task is ubiquitous in, for instance, crystallography, astronomy and diffraction imaging. Throughout this project, a series of increasingly better convex reconstruction guarantees has been established. On the one hand, we improved results for certain measurement models that mimic typical experimental setups in diffraction imaging. On the other hand, we identified spherical t-designs as a general-purpose tool for the derandomization of data recovery schemes. Loosely speaking, a t-design is a finite configuration of vectors that is "evenly distributed" in the sense that it reproduces the first 2t moments of the uniform measure.
Such configurations have been studied, for instance, in algebraic combinatorics, coding theory, and quantum information. We have shown that spherical 4-designs already allow for proving close-to-optimal convex reconstruction guarantees for phase retrieval. The success of this program depends on explicit constructions of spherical t-designs. In this regard, we have studied the design properties of stabilizer states. These are configurations of vectors that feature prominently in quantum information theory. Mathematically, they can be related to objects in discrete symplectic vector spaces---a structure we use heavily. We have shown that these vectors form a spherical 3-design and are, in some sense, close to a spherical 4-design. Putting these efforts together, we establish tight bounds on phase retrieval from stabilizer measurements. While working on the derandomization of phase retrieval, I obtained a number of results on other convex signal reconstruction problems. These include compressed sensing from anisotropic measurements, non-negative compressed sensing in the presence of noise, and improved convex regularizers for low-rank matrix reconstruction. Going even further, the mathematical methods I used to tackle ill-posed inverse problems can be applied to a plethora of problems from quantum information theory; in particular, the causal structure behind Bell inequalities, new ways to compare experiments to fault-tolerance thresholds in quantum error correction, a novel benchmark for quantum state tomography via Bayesian estimation, and the task of distinguishing quantum states.
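    The "generic" Gaussian compressed sensing baseline mentioned above can be sketched with off-the-shelf tools: basis pursuit, min ||x||_1 subject to Ax = y, rewritten as a linear program via the splitting x = u - v with u, v >= 0. The dimensions, the SciPy solver, and the random seed are illustrative assumptions; none of the thesis's structured measurement ensembles are reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, s = 40, 20, 3             # ambient dimension, measurements, sparsity
A = rng.normal(size=(m, n))     # Gaussian measurement matrix (the generic case)
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.normal(size=s)
y = A @ x_true

# Basis pursuit  min ||x||_1  s.t.  Ax = y,  as an LP over (u, v) with x = u - v.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

# x_true is feasible, so the minimizer never has larger l1 norm; for Gaussian A
# with enough measurements relative to the sparsity, recovery is typically exact.
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

    The rigorous guarantees discussed in the thesis say when such a program provably returns x_true; replacing the Gaussian rows with a structured ensemble (e.g. a t-design) is precisely the step that requires new proof techniques.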

    Evaluation of a novel, serum-based, biomarker screening test for colorectal cancer.

    This study evaluates a new serum-based biomarker for colorectal cancer (CRC) screening and diagnosis. The biomarker (GTA-446) is a member of the hydroxy-polyunsaturated ultra-long-chain fatty acids and was found to be reduced in CRC patients compared to CRC-free subjects. Diagnostic test performance characteristics were used to assess the effectiveness of the test. Methods: Serum levels of GTA-446 were measured in 4924 subjects who underwent colonoscopy for any reason; pathology results and clinical data were also collected. Two sets of age-matched control subjects were used. The first were laboratory controls (n=383), serum samples collected from the Saskatchewan Disease Control Laboratory along with age and gender data. The second were endoscopy controls (n=762), obtained from the colonoscopy population after being determined to be cancer-free. Cut-off values were calculated using a Receiver Operating Characteristic (ROC) curve. Results: Serum GTA-446 was found to be reduced in 87% of CRC patients. Compared to lab controls, the GTA-446 biomarker has a sensitivity of 87%, a specificity of 75%, a positive likelihood ratio of 3.6, and a negative likelihood ratio of 0.16. Using endoscopy controls to calculate test performance characteristics, the biomarker has a sensitivity of 87%, a specificity of 50%, a positive likelihood ratio of 1.74, and a negative likelihood ratio of 0.24. Also, the level of GTA-446 was found to decline significantly with age (r=-0.20, p<0.01). Conclusion: Serum GTA-446 is a potential biomarker for minimally invasive detection of colorectal cancer that compares favorably to other serum-based biomarkers.
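    The reported characteristics are related by the standard formulas LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity. A quick check in Python follows; note that plugging in the rounded 87% / 75% figures does not exactly reproduce the published 3.6 and 0.16, which presumably derive from unrounded values.

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a diagnostic test."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Lab-control comparison, using the rounded figures quoted above.
lr_pos, lr_neg = likelihood_ratios(0.87, 0.75)
print(round(lr_pos, 2), round(lr_neg, 2))  # 3.48 0.17
```

    The same formulas applied to the endoscopy controls (87% / 50%) give LR+ = 1.74 and LR- = 0.26, close to the reported 1.74 and 0.24.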

    Some aspects of a code division multiple access local area network

    Not Available

    Aerospace Medicine and Biology: A continuing bibliography with indexes, supplement 140

    This bibliography lists 306 reports, articles, and other documents introduced into the NASA scientific and technical information system in March 1975.

    A Salad of Block Ciphers

    This book is a survey on the state of the art in block cipher design and analysis. It is a work in progress, and it has been for the good part of the last three years -- sadly, for various reasons no significant change has been made during the last twelve months. However, it is also in a self-contained, usable, and relatively polished state, and for this reason I have decided to release this snapshot to the public as a service to the cryptographic community, both in order to obtain feedback, and also as a means to give something back to the community from which I have learned much. At some point I will produce a final version -- whatever a "final version" means in the constantly evolving field of block cipher design -- and I will publish it. In the meantime I hope the material contained here will be useful to other people.