
    Minimum-Variance Importance-Sampling Bernoulli Estimator for Fast Simulation of Linear Block Codes over Binary Symmetric Channels

    In this paper, the choice of the Bernoulli distribution as the biased distribution for importance sampling (IS) Monte Carlo (MC) simulation of linear block codes over binary symmetric channels (BSCs) is studied. Based on the analytical derivation of the optimal IS Bernoulli distribution, with explicit calculation of the variance of the corresponding IS estimator, two novel algorithms for the fast simulation of linear block codes are proposed. For sufficiently high signal-to-noise ratios (SNRs), one of the proposed algorithms is SNR-invariant, i.e. the IS estimator does not depend on the cross-over probability of the channel. The proposed algorithms are also shown to be suitable for estimating the error-correcting capability of the code and the decoder. Finally, the effectiveness of the algorithms is confirmed through simulation results and a comparison with the standard Monte Carlo method.
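
    As a rough illustration of the importance-sampling idea, the Python sketch below estimates the decoding-error probability of a bounded-distance decoder over a BSC by drawing error patterns from a biased Bernoulli(q) distribution and re-weighting each sample by the likelihood ratio. The (7,4) Hamming setting, the biasing probability q = 0.3, and the weight-threshold decoder model are illustrative assumptions, not the algorithms proposed in the paper.

```python
import math
import numpy as np

def is_bsc_error_estimate(n=7, t=1, p=1e-3, q=0.3, num_samples=100_000, rng=None):
    """Importance-sampling estimate of the probability that more than t errors
    occur on a length-n BSC(p), i.e. the failure event of a bounded-distance
    decoder correcting up to t errors (illustrative decoder model).

    Error patterns are drawn from a biased Bernoulli(q) distribution and each
    sample is re-weighted by (p/q)^w * ((1-p)/(1-q))^(n-w), with w the weight.
    """
    rng = rng or np.random.default_rng(0)
    errors = rng.random((num_samples, n)) < q              # biased error patterns
    w = errors.sum(axis=1)                                 # Hamming weights
    weights = (p / q) ** w * ((1 - p) / (1 - q)) ** (n - w)
    fail = w > t                                           # decoder failure indicator
    estimate = np.mean(fail * weights)
    std_err = np.std(fail * weights) / np.sqrt(num_samples)
    return estimate, std_err

# Example: hypothetical (7,4) Hamming code with t = 1, checked against the exact tail.
est, err = is_bsc_error_estimate()
exact = sum(math.comb(7, w) * 1e-3 ** w * (1 - 1e-3) ** (7 - w) for w in range(2, 8))
print(f"IS estimate {est:.3e} +/- {err:.1e}, exact {exact:.3e}")
```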

    Coherence Optimization and Best Complex Antipodal Spherical Codes

    Vector sets with optimal coherence according to the Welch bound cannot exist for all pairs of dimension and cardinality. If such an optimal vector set exists, it is an equiangular tight frame and represents the solution to a Grassmannian line packing problem. Best Complex Antipodal Spherical Codes (BCASCs) are the best vector sets with respect to coherence. By extending methods used to find best spherical codes in real-valued Euclidean space, the proposed approach aims to find BCASCs and, thereby, complex-valued vector sets with minimal coherence. Many applications demand vector sets with low coherence; examples include several techniques in wireless communication and the field of compressed sensing. Within this contribution, existing analytical and numerical approaches for coherence optimization of complex-valued vector sets are summarized and compared to the proposed approach. The numerically obtained coherence values improve on previously reported results. The drawback of increased computational effort is addressed, and a faster approximation is proposed which may be an alternative for time-critical cases.
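
    For reference, the coherence of a vector set and the Welch lower bound against which it is measured can be computed as in the short sketch below; the random complex frame in the example is only a placeholder, not a BCASC produced by the paper's optimization.

```python
import numpy as np

def coherence(V):
    """Maximum absolute inner product between distinct unit-norm columns of V."""
    V = V / np.linalg.norm(V, axis=0, keepdims=True)    # normalise columns
    G = np.abs(V.conj().T @ V)                          # Gram matrix magnitudes
    np.fill_diagonal(G, 0.0)                            # ignore self inner products
    return G.max()

def welch_bound(d, n):
    """Lower bound on the coherence of n unit-norm vectors in C^d (n > d)."""
    return np.sqrt((n - d) / (d * (n - 1)))

# Example: random complex frame of 6 vectors in C^3 compared to the Welch bound.
rng = np.random.default_rng(1)
V = rng.standard_normal((3, 6)) + 1j * rng.standard_normal((3, 6))
print(f"coherence = {coherence(V):.4f}, Welch bound = {welch_bound(3, 6):.4f}")
```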

    Polar Codes: Finite Length Implementation, Error Correlations and Multilevel Modulation

    Shannon, in his seminal work, formalized the transmission of data over a communication channel and determined its fundamental limits. He characterized the relation between communication rate and error probability and showed that, as long as the communication rate is below the capacity of the channel, the error probability can be made as small as desired by using appropriate coding and letting the codeword length approach infinity. He provided the formula for the capacity of a discrete memoryless channel. However, his proposed coding scheme was too complex to be practical in communication systems. Polar codes, recently introduced by Arıkan, are the first practical codes that are known to achieve capacity for a large class of channels and have low encoding and decoding complexity. The original polar codes of Arıkan achieve a block error probability decaying exponentially in the square root of the block length as it goes to infinity. However, it is interesting to investigate their performance at finite length, as this is the case in all practical communication schemes. In this dissertation, after a brief overview of polar codes, we introduce a practical framework for the simulation of error-correcting codes in general. We introduce the importance sampling concept to efficiently evaluate the performance of polar codes with finite block length. Next, based on simulation results, we investigate the performance of different genie-aided decoders to mitigate the poor performance of polar codes at low to moderate block lengths, and we propose single-error-correction methods that improve the performance dramatically at the expense of decoder complexity. In this context, we also study the correlation between error events in a successive cancellation decoder. Finally, we investigate the performance of polar codes on non-binary channels. We compare the code construction of Sasoglu for Q-ary channels with classical multilevel codes. We construct multilevel polar codes for Q-ary channels and provide a thorough comparison of the complexity and performance of the two methods at finite length.
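
    A minimal sketch of the polar transform mentioned above is given below: encoding amounts to multiplying the input vector by the n-fold Kronecker power of Arıkan's 2x2 kernel over GF(2). The bit-reversal permutation used in some formulations is omitted, and the frozen-bit positions in the example are illustrative rather than taken from the dissertation.

```python
import numpy as np

def polar_encode(u, n):
    """Encode a length-2^n input vector u with the polar transform F^{(kron) n},
    where F = [[1, 0], [1, 1]] and all arithmetic is over GF(2)."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)                # n-fold Kronecker power of the kernel
    return (u @ G) % 2

# Example: N = 8 code with an illustrative frozen set fixed to zero.
N, frozen = 8, [0, 1, 2, 4]
u = np.zeros(N, dtype=np.uint8)
info_positions = [i for i in range(N) if i not in frozen]
u[info_positions] = [1, 0, 1, 1]         # information bits
print(polar_encode(u, 3))
```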

    Unequal Error Protection Querying Policies for the Noisy 20 Questions Problem

    In this paper, we propose an open-loop unequal-error-protection querying policy based on superposition coding for the noisy 20 questions problem. In this problem, a player wishes to successively refine an estimate of the value of a continuous random variable by posing binary queries and receiving noisy responses. When the queries are designed non-adaptively as a single block and the noisy responses are modeled as the output of a binary symmetric channel, the 20 questions problem can be mapped to an equivalent problem of channel coding with unequal error protection (UEP). A new non-adaptive querying strategy based on UEP superposition coding is introduced whose estimation error decreases with an exponential rate of convergence that is significantly better than that of the UEP repetition coding introduced by Variani et al. (2015). With the proposed querying strategy, the rate of exponential decrease in the number of queries matches the rate of a closed-loop adaptive scheme in which queries are sequentially designed with the benefit of feedback. Furthermore, the achievable error exponent is significantly better than that of random block codes employing equal error protection. Comment: To appear in IEEE Transactions on Information Theory.
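
    The sketch below illustrates the non-adaptive (open-loop) setting with the simplest baseline: equal-error-protection repetition coding of the dyadic digits of the target value, with the responses passed through a BSC and majority-voted. It is not the UEP superposition-coding strategy proposed in the paper, and all parameters are illustrative.

```python
import numpy as np

def noisy_20q_repetition(theta, m=8, reps=5, eps=0.1, rng=None):
    """Non-adaptive noisy 20 questions baseline: query each of the m dyadic
    digits of theta in [0, 1) `reps` times, flip each response with probability
    eps (BSC), and majority-vote each digit (equal error protection)."""
    rng = rng or np.random.default_rng(0)
    bits = [(int(theta * 2 ** (i + 1)) % 2) for i in range(m)]   # dyadic digits
    answers = np.repeat(bits, reps)                              # single query block
    noisy = answers ^ (rng.random(answers.size) < eps)           # BSC responses
    votes = noisy.reshape(m, reps).sum(axis=1) > reps / 2        # majority vote
    return sum(b * 2.0 ** -(i + 1) for i, b in enumerate(votes))

# Example: estimate theta = 0.618 from 70 noisy binary queries.
print(noisy_20q_repetition(theta=0.618, m=10, reps=7, eps=0.1))
```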

    Enhanced optical tweezing: from hydrodynamic micro-manipulation to optimised optical trapping

    Optical tweezers, with their ability to trap particles at the focus of a laser beam and control their motion, have provided unparalleled insight into the inner workings of the micro-world of colloids, cells and biomolecules. However, not all materials yield to optical trapping, and living organisms can be damaged by direct light exposure. Many optical tweezing experiments are performed in aqueous environments, which offers a route to indirect particle manipulation via the surrounding fluid. We develop, study, and experimentally demonstrate an approach which, by employing optically controlled micro-rotors to induce flow currents in the surrounding fluid, exerts near-field hydrodynamic control over freely diffusing particles, irrespective of their material. With our optically actuated hydrodynamic manipulation we were able to suppress the thermal motion of single sedimented micro-sized target particles of various materials in both translational and rotational degrees of freedom, translate individual particles over complex local trajectories or transport them over long distances across the sample cell, and exert control over multiple particles simultaneously. The biggest challenge in our hydrodynamic manipulation technique is the accuracy with which the optical tweezers can control the actuator motion, which ultimately comes down to the optical trapping stiffness. We employ generalised Wigner-Smith operators alongside an optimisation scheme to upgrade the optical trapping field from a standard Gaussian beam to a three-dimensionally stiffness-enhanced trap. In simulations we demonstrate light fields with an order-of-magnitude stiffness enhancement in all three dimensions simultaneously. Such fields, as well as the techniques used to develop them, can find applications throughout the wide community of optical trapping and manipulation.
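
    Since the achievable accuracy of the actuator control ultimately rests on the optical trapping stiffness, the sketch below shows the standard equipartition estimate of stiffness from recorded bead positions; it is a generic calibration formula, not the Wigner-Smith optimisation developed in the thesis, and the example trap stiffness is hypothetical.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness_equipartition(positions_m, temperature_K=295.0):
    """Estimate trap stiffness along one axis from bead positions (in metres)
    via the equipartition theorem: (1/2) k <x^2> = (1/2) k_B T  =>  k = k_B T / Var(x)."""
    var = np.var(positions_m - np.mean(positions_m))
    return K_B * temperature_K / var

# Example with synthetic positions drawn from the Boltzmann distribution of a
# harmonic trap with stiffness 1e-6 N/m (about 1 pN/um).
rng = np.random.default_rng(2)
k_true = 1e-6
x = rng.normal(0.0, np.sqrt(K_B * 295.0 / k_true), size=100_000)
print(f"recovered stiffness: {trap_stiffness_equipartition(x):.2e} N/m")
```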

    Device-Independent Quantum Key Distribution

    Cryptographic key exchange protocols traditionally rely on computational conjectures, such as the hardness of prime factorisation, to provide security against eavesdropping attacks. Remarkably, quantum key distribution protocols like the one proposed by Bennett and Brassard provide information-theoretic security against such attacks, a much stronger form of security unreachable by classical means. However, the quantum protocols realised so far are subject to a new class of attacks exploiting implementation defects in the physical devices involved, as demonstrated in numerous ingenious experiments. Following the pioneering work of Ekert, who proposed using entanglement to bound an adversary's information via Bell's theorem, we present here the experimental realisation of a complete quantum key distribution protocol immune to these vulnerabilities. We achieve this by combining theoretical developments in finite-statistics analysis, error correction, and privacy amplification with an event-ready scheme enabling the rapid generation of high-fidelity entanglement between two trapped-ion qubits connected by an optical-fibre link. The secrecy of our key is guaranteed device-independently: it is based on the validity of quantum theory and certified by the measurement statistics observed during the experiment. Our result shows that provably secure cryptography with real-world devices is possible, and it paves the way for further quantum information applications based on the device-independence principle. Comment: 5+1 pages in main text and methods with 4 figures and 1 table; 37 pages of supplementary material.
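
    Device-independent security certification rests on the observed Bell-inequality violation. The sketch below computes the CHSH statistic from hypothetical coincidence counts; it is only an illustration of that certification step, not the finite-statistics analysis used in the experiment.

```python
import numpy as np

def chsh_value(counts):
    """CHSH statistic S = E(0,0) + E(0,1) + E(1,0) - E(1,1) from coincidence
    counts. counts[(x, y)] is a 2x2 array of counts of outcomes (a, b) for
    settings x (Alice) and y (Bob). Local models obey S <= 2; quantum devices
    can reach up to 2*sqrt(2)."""
    def correlator(c):
        c = np.asarray(c, dtype=float)
        return (c[0, 0] + c[1, 1] - c[0, 1] - c[1, 0]) / c.sum()
    return (correlator(counts[(0, 0)]) + correlator(counts[(0, 1)])
            + correlator(counts[(1, 0)]) - correlator(counts[(1, 1)]))

# Example: idealised counts close to the Tsirelson bound 2*sqrt(2) ~ 2.83.
p = (1 + 1 / np.sqrt(2)) / 2            # P(a = b) for the three "+" terms
same, diff = int(1000 * p) // 2, int(1000 * (1 - p)) // 2
aligned = [[same, diff], [diff, same]]
anti    = [[diff, same], [same, diff]]
counts = {(0, 0): aligned, (0, 1): aligned, (1, 0): aligned, (1, 1): anti}
print(f"S = {chsh_value(counts):.3f}")
```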