
    Deterministic Construction of Binary, Bipolar and Ternary Compressed Sensing Matrices

    In this paper we establish the connection between Orthogonal Optical Codes (OOC) and binary compressed sensing matrices. We also introduce deterministic bipolar $m\times n$ RIP-fulfilling $\pm 1$ matrices of order $k$ such that $m\leq\mathcal{O}\big(k(\log_2 n)^{\frac{\log_2 k}{\ln\log_2 k}}\big)$. The columns of these matrices are binary BCH code vectors in which the zeros are replaced by -1. Since the RIP is established by means of coherence, simple greedy algorithms such as Matching Pursuit are able to recover the sparse solution from the noiseless samples. Due to the cyclic property of the BCH codes, we show that the FFT algorithm can be employed in the reconstruction methods to considerably reduce the computational complexity. In addition, we combine the binary and bipolar matrices to form ternary sensing matrices (with $\{0,1,-1\}$ elements) that satisfy the RIP condition. Comment: The paper is accepted for publication in IEEE Transactions on Information Theory.
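
    As a rough illustration of the coherence-based recovery mentioned in the abstract, the sketch below runs plain Matching Pursuit against a bipolar sensing matrix. The random $\pm 1$ columns are a stand-in (the paper's columns come from binary BCH codewords with zeros mapped to -1), and the sizes and sparsity level are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in bipolar sensing matrix: the paper derives its columns from
        # binary BCH codewords with 0 -> -1; random +/-1 columns are used here
        # purely for illustration.
        m, n, k = 64, 256, 4
        A = rng.choice([-1.0, 1.0], size=(m, n))
        A /= np.linalg.norm(A, axis=0)           # unit-norm columns

        # k-sparse ground truth and noiseless samples
        x = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        x[support] = rng.standard_normal(k)
        y = A @ x

        # Plain Matching Pursuit: repeatedly pick the column most correlated
        # with the residual and subtract its contribution.
        residual = y.copy()
        x_hat = np.zeros(n)
        for _ in range(4 * k):
            corr = A.T @ residual
            j = np.argmax(np.abs(corr))
            x_hat[j] += corr[j]
            residual -= corr[j] * A[:, j]

        top = np.sort(np.argsort(np.abs(x_hat))[-k:])
        print("recovered support:", top)
        print("true support:     ", np.sort(support))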

    Fast Decoder for Overloaded Uniquely Decodable Synchronous CDMA

    We consider the problem of designing a fast decoder for antipodal uniquely decodable (errorless) code sets for overloaded synchronous code-division multiple access (CDMA) systems, where the number of signals $K_{max}^a$ is the largest known for the given code length $L$. The proposed decoder is designed in such a way that the users can uniquely recover the information bits with a very simple decoder that uses only a few comparisons. Compared to the maximum-likelihood (ML) decoder, which has a high computational complexity even for moderate code lengths, the proposed decoder has a much lower computational complexity. Simulation results in terms of bit error rate (BER) demonstrate that the proposed decoder suffers only a 1-2 dB degradation at a BER of $10^{-3}$ when compared to the ML decoder.
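
    To make the complexity gap concrete, here is a minimal brute-force ML decoder for a small overloaded antipodal system: it scans all $2^K$ bit patterns, which is exactly the exponential cost a comparison-based decoder avoids. The random code matrix and noise level are placeholders, not the uniquely decodable code sets constructed in the paper.

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(1)

        # Placeholder antipodal code set: L chips, K users (K > L, overloaded).
        # The paper uses specially constructed uniquely decodable codes; a
        # random +/-1 matrix stands in here just to exhibit the ML cost.
        L, K = 8, 10
        C = rng.choice([-1.0, 1.0], size=(L, K))

        b = rng.choice([-1.0, 1.0], size=K)        # transmitted bits
        y = C @ b + 0.1 * rng.standard_normal(L)   # noisy chip-rate samples

        # Exhaustive ML decoding: test all 2^K hypotheses and keep the one
        # whose superposition is closest to y -- exponential in K.
        best, best_err = None, np.inf
        for cand in product([-1.0, 1.0], repeat=K):
            err = np.linalg.norm(y - C @ np.asarray(cand))
            if err < best_err:
                best, best_err = np.asarray(cand), err

        print("bit errors:", int(np.sum(best != b)), "of", K)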

    Algebraic number theory and code design for Rayleigh fading channels

    Algebraic number theory is having an increasing impact on code design for many different coding applications, such as single-antenna fading channels and, more recently, MIMO systems. Extensive work has been done on single-antenna fading channels, and algebraic lattice codes have proven to be an effective tool. The general framework has been settled in the last ten years, and many explicit code constructions based on algebraic number theory are now available. The aim of this work is to provide both an overview of algebraic lattice code designs for Rayleigh fading channels and a tutorial introduction to algebraic number theory. The basic facts of this mathematical field are illustrated by many examples and by the use of computer algebra freeware, in order to make the material accessible to a large audience.
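
    As a tiny numerical taste of the constructions surveyed in this line of work, the snippet below embeds the ring $\mathbb{Z}[\theta]$, $\theta = (1+\sqrt{5})/2$, into $\mathbb{R}^2$ via its two real embeddings. The coordinate product of any nonzero lattice point equals the field norm $a^2 + ab - b^2$, a nonzero integer, giving full diversity and minimum product distance 1. This classical golden-ratio example and the search box are our own illustrative choices, not taken from the tutorial.

        import numpy as np
        from itertools import product

        # Canonical embedding of Z[theta], theta = (1+sqrt(5))/2, into R^2:
        # sigma(a + b*theta) = (a + b*theta, a + b*theta_bar).  The coordinate
        # product equals the algebraic norm a^2 + ab - b^2, a nonzero integer
        # for any nonzero point, hence full diversity and product distance >= 1.
        theta = (1 + np.sqrt(5)) / 2
        theta_bar = (1 - np.sqrt(5)) / 2

        min_prod = np.inf
        for a, b in product(range(-5, 6), repeat=2):
            if (a, b) == (0, 0):
                continue
            v = np.array([a + b * theta, a + b * theta_bar])
            min_prod = min(min_prod, abs(v.prod()))
            # coordinate product equals the field norm
            assert np.isclose(v.prod(), a * a + a * b - b * b)

        print("minimum product distance over the search box:", min_prod)  # ~1.0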

    Investigation on Evolving Single-Carrier NOMA into Multi-Carrier NOMA in 5G

    Non-orthogonal multiple access (NOMA) is a promising technology for addressing several challenges in fifth-generation wireless systems, as it provides high system capacity, low latency, and massive connectivity. In this paper, we first show how NOMA techniques have evolved from single-carrier NOMA (SC-NOMA) into multi-carrier NOMA (MC-NOMA). Then, we comprehensively investigate the basic principles, enabling schemes, and evaluations of the two most promising MC-NOMA techniques, namely sparse code multiple access (SCMA) and pattern division multiple access (PDMA). We also discuss how the research challenges of SCMA and PDMA might be addressed by building on the mature progress made in SC-NOMA. Finally, we examine emerging applications and point out future research trends for MC-NOMA techniques, which could be directly inspired by the various deployments of SC-NOMA.
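
    For a hint of what the SCMA enabling scheme looks like, the sketch below sets up the classic 6-user/4-subcarrier factor graph (150% overloading) and superposes one codeword per user. The codebooks are random placeholders; real SCMA codebooks are carefully optimized multidimensional constellations, and detection is normally done with message passing rather than shown here.

        import numpy as np

        rng = np.random.default_rng(2)

        # Classic SCMA factor graph: J = 6 users share K = 4 subcarriers,
        # each user occupying d_v = 2 subcarriers, each subcarrier carrying
        # d_f = 3 users (150% overloading).
        F = np.array([[1, 1, 1, 0, 0, 0],
                      [1, 0, 0, 1, 1, 0],
                      [0, 1, 0, 1, 0, 1],
                      [0, 0, 1, 0, 1, 1]])
        K, J = F.shape
        M = 4                                    # codebook size (2 bits/user)

        # One placeholder codebook per user, nonzero only on its subcarriers.
        codebooks = np.zeros((J, M, K), dtype=complex)
        for j in range(J):
            rows = np.flatnonzero(F[:, j])
            codebooks[j][:, rows] = (rng.standard_normal((M, 2))
                                     + 1j * rng.standard_normal((M, 2)))

        symbols = rng.integers(0, M, size=J)     # each user's 2-bit symbol
        y = sum(codebooks[j][symbols[j]] for j in range(J))  # superposition

        print("users colliding per subcarrier:", F.sum(axis=1))  # [3 3 3 3]
        print("received vector:", np.round(y, 2))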

    Investigation of Non-coherent Discrete Target Range Estimation Techniques for High-precision Location

    Ranging is an essential and crucial task for radar systems, and solving the range-detection problem effectively and precisely is of great importance; unambiguity and high resolution are key concerns as well. Both coherent and non-coherent techniques can be applied to range estimation, and each has advantages and disadvantages: coherent estimates offer higher precision but are more vulnerable to noise, clutter, and phase-wrap errors, particularly in complex or harsh environments, while non-coherent approaches are simpler but provide lower precision. To mitigate inaccuracy and perturbation in range estimation, a variety of techniques are employed to achieve optimally precise detection, and many elegant processing solutions stemming from non-coherent estimation have been introduced into the coherent realm, and vice versa. This thesis describes two non-coherent ranging techniques with novel algorithms that mitigate the intrinsic deficiencies of non-coherent ranging approaches. One technique is based on peak detection and realised by Kth-order Polynomial Interpolation (KPI), while the other is based on the Z-transform and realised by a maximum-likelihood Chirp Z-transform. Both algorithms apply a two-stage approach to the fine ranging estimate in the Discrete Fourier transform domain: an N-point Discrete Fourier transform is first computed to obtain a coarse estimate, and an accurate refinement is then conducted around the point of interest determined in the first stage. The KPI technique interpolates around the peak of the Discrete Fourier transform profile of the chirp signal to achieve accurate interpolation and optimum precision. In the maximum-likelihood Chirp Z-transform technique, the Chirp Z-transform accurately implements the periodogram over a narrow band of the spectrum, and the maximum-likelihood estimator is combined with the Chirp Z-transform to obtain better ranging performance. The Cramér-Rao lower bound is presented to evaluate the performance of these two techniques from the perspective of statistical signal processing. Mathematical derivation, simulation modelling, theoretical analysis, and experimental validation are conducted to assess the techniques' performance. Further research will address algorithm optimisation and the development of a location system using non-coherent techniques, with a comparison to a coherent approach.
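
    For a flavour of the two-stage estimate described above, here is a minimal frequency-domain version: a coarse N-point DFT peak search followed by a quadratic (K = 2) interpolation around the peak, the simplest instance of a Kth-order polynomial fit. The signal model and parameters are illustrative assumptions, not those of the thesis.

        import numpy as np

        rng = np.random.default_rng(3)

        # Stage-1/stage-2 frequency estimate: coarse DFT peak, then a
        # parabola fitted through the peak bin and its two neighbours.
        N = 1024
        f_true = 0.1234                          # true normalized frequency
        t = np.arange(N)
        x = np.exp(2j * np.pi * f_true * t) + 0.1 * rng.standard_normal(N)

        # Stage 1: coarse estimate from the DFT magnitude peak.
        X = np.abs(np.fft.fft(x))
        k0 = np.argmax(X[: N // 2])

        # Stage 2: the parabola's vertex gives a sub-bin correction
        # delta in (-0.5, 0.5).
        alpha, beta, gamma = X[k0 - 1], X[k0], X[k0 + 1]
        delta = 0.5 * (alpha - gamma) / (alpha - 2 * beta + gamma)
        f_hat = (k0 + delta) / N

        print(f"coarse: {k0 / N:.6f}  fine: {f_hat:.6f}  true: {f_true:.6f}")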

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered within their instantaneous field of view in hundreds or thousands of spectral channels, with higher spectral resolution than multispectral cameras; they are therefore often referred to as hyperspectral cameras (HSCs). The higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, the spectra measured by HSCs are mixtures of the spectra of the materials in a scene; accurate estimation therefore requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers, and unmixing involves estimating all or some of the following: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data-set size. Researchers have devised and investigated many models in the search for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first; signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are then described, along with the underlying mathematical problems and potential solutions, and algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
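
    As a small concrete example of unmixing under the linear mixing model, the snippet below estimates per-pixel abundances with nonnegative least squares, folding the sum-to-one constraint into a heavily weighted augmented row (a common fully constrained least-squares trick). The endmember signatures and data are synthetic placeholders, not from any of the surveyed algorithms.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(4)

        # Linear mixing model: pixel y = M a + noise, with M the endmember
        # signatures and a the abundances (nonnegative, summing to one).
        bands, p = 50, 3
        M = np.abs(rng.standard_normal((bands, p)))   # placeholder signatures

        a_true = rng.dirichlet(np.ones(p))            # true abundances
        y = M @ a_true + 0.01 * rng.standard_normal(bands)

        # Augment with a weighted row of ones to enforce sum(a) ~ 1, then
        # solve the nonnegativity-constrained least-squares problem.
        rho = 100.0
        M_aug = np.vstack([M, rho * np.ones((1, p))])
        y_aug = np.append(y, rho)

        a_hat, _ = nnls(M_aug, y_aug)
        print("true abundances:     ", np.round(a_true, 3))
        print("estimated abundances:", np.round(a_hat, 3))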

    The Security of Practical Quantum Key Distribution

    Quantum key distribution (QKD) is the first quantum information task to reach the level of mature technology, already fit for commercialization. It aims at creating a secret key between authorized partners connected by a quantum channel and a classical authenticated channel. The security of the key can in principle be guaranteed without putting any restriction on the eavesdropper's power. The first two sections provide a concise, up-to-date review of QKD, biased toward the practical side. The rest of the paper presents the essential theoretical tools that have been developed to assess the security of the main experimental platforms (discrete variables, continuous variables, and distributed-phase-reference protocols). Comment: Identical to the published version, up to cosmetic editorial changes.
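
    To ground the security discussion in one textbook number: the asymptotic secret-key rate of BB84 with one-way post-processing is r = 1 - 2h(Q), positive only below a quantum bit error rate of roughly 11%. The few lines below merely evaluate this standard formula; the review itself treats far more general settings, including finite-key and device-imperfection effects.

        import numpy as np

        def h2(q):
            """Binary entropy in bits."""
            q = np.clip(q, 1e-12, 1 - 1e-12)
            return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

        # Asymptotic BB84 key rate r = 1 - 2 h(Q): crosses zero near Q = 11%.
        for Q in (0.01, 0.05, 0.11, 0.12):
            print(f"QBER = {Q:.2f}  ->  key rate = {1 - 2 * h2(Q):.4f}")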