
    Turbo Decoding and Detection for Wireless Applications

    A historical perspective of turbo coding and turbo transceivers inspired by the generic turbo principles is provided, as it evolved from Shannon’s visionary predictions. More specifically, we commence by discussing the turbo principles, which have been shown to be capable of performing close to Shannon’s capacity limit. We continue by reviewing the classic maximum a posteriori probability decoder. These discussions are followed by a systematic study of the effect of a range of system parameters, in order to gauge their performance ramifications. In the second part of this treatise, we focus our attention on the family of iterative receivers designed for wireless communication systems, which were partly inspired by the invention of turbo codes. More specifically, the family of iteratively detected joint coding and modulation schemes, turbo equalization, concatenated space-time and channel coding arrangements, as well as multi-user detection and three-stage multimedia systems are highlighted.

    A Belief Propagation Based Framework for Soft Multiple-Symbol Differential Detection

    Soft noncoherent detection, which relies on calculating the a posteriori probabilities (APPs) of the transmitted bits without channel estimation, is imperative for achieving excellent detection performance in high-dimensional wireless communications. In this paper, a high-performance belief propagation (BP)-based soft multiple-symbol differential detection (MSDD) framework, dubbed BP-MSDD, is proposed, with an illustrative application to differential space-time block-code (DSTBC)-aided ultra-wideband impulse radio (UWB-IR) systems. Firstly, we revisit the signal sampling with the aid of a trellis structure and decompose the trellis into multiple subtrellises. Furthermore, we derive an APP calculation algorithm in which the forward-and-backward message passing mechanism of BP operates on the subtrellises. The proposed BP-MSDD significantly outperforms conventional hard-decision MSDDs. However, its computational complexity increases exponentially with the number of MSDD trellis states. To circumvent this excessive complexity for practical implementations, we reformulate the BP-MSDD, and additionally propose a Viterbi algorithm (VA)-based hard-decision MSDD (VA-HMSDD) and a VA-based soft-decision MSDD (VA-SMSDD). Moreover, both the proposed BP-MSDD and VA-SMSDD can be exploited in conjunction with soft channel decoding to obtain powerful iterative detection and decoding based receivers. Simulation results demonstrate the effectiveness of the proposed algorithms in DSTBC-aided UWB-IR systems.
    Comment: 14 pages, 12 figures, 3 tables, accepted to appear in IEEE Transactions on Wireless Communications, Aug. 201
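The forward-and-backward message passing underlying APP calculation on a trellis can be illustrated with a generic forward-backward recursion; the sketch below operates on a plain state trellis with hypothetical inputs, and is an illustration of the mechanism only, not the paper's subtrellis-decomposed BP-MSDD:

```python
import numpy as np

def forward_backward(trans, emit_like, prior):
    """Generic forward-backward message passing on a trellis.

    trans[i, j]    : transition probability from state i to state j
    emit_like[t, i]: likelihood of the observation at step t given state i
    prior[i]       : initial state probability
    Returns the per-step a posteriori state probabilities (APPs).
    """
    T, S = emit_like.shape
    alpha = np.zeros((T, S))                 # forward messages
    beta = np.zeros((T, S))                  # backward messages
    alpha[0] = prior * emit_like[0]
    alpha[0] /= alpha[0].sum()               # normalise to avoid underflow
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ trans) * emit_like[t]
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = trans @ (emit_like[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    app = alpha * beta                       # combine messages into APPs
    return app / app.sum(axis=1, keepdims=True)
```

Normalising at every step keeps the recursion numerically stable for long observation blocks without changing the resulting APPs.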

    Gaussian Message Passing for Overloaded Massive MIMO-NOMA

    This paper considers a low-complexity Gaussian Message Passing (GMP) scheme for coded massive Multiple-Input Multiple-Output (MIMO) systems with Non-Orthogonal Multiple Access (massive MIMO-NOMA), in which a base station with N_s antennas serves N_u sources simultaneously at the same frequency. Both N_u and N_s are large numbers, and we consider the overloaded case with N_u > N_s. The GMP for MIMO-NOMA is a message passing algorithm operating on a fully-connected loopy factor graph, which is well understood to fail to converge due to the correlation problem. In this paper, we utilize the large-scale property of the system to simplify the convergence analysis of the GMP under the overloaded condition. First, we prove that the variances of the GMP definitely converge to the mean square error (MSE) of Linear Minimum Mean Square Error (LMMSE) multi-user detection. Secondly, the means of the traditional GMP fail to converge when N_u/N_s < (sqrt(2)-1)^(-2) ≈ 5.83. Therefore, we propose and derive a new convergent GMP called scale-and-add GMP (SA-GMP), which always converges to the LMMSE multi-user detection performance for any N_u/N_s > 1, and show that it has a faster convergence speed than the traditional GMP with the same complexity. Finally, numerical results are provided to verify the validity and accuracy of the theoretical results presented.
    Comment: Accepted by IEEE TWC, 16 pages, 11 figures
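The LMMSE multi-user detector that the GMP variances converge to can be sketched directly; the snippet below computes the LMMSE estimate and its per-user MSE for an overloaded real-valued channel, assuming unit-variance i.i.d. sources. It is the benchmark referred to in the abstract, not an implementation of the GMP or SA-GMP iterations:

```python
import numpy as np

def lmmse_detect(H, y, noise_var):
    """LMMSE multi-user estimate x_hat = (H^T H + sigma^2 I)^(-1) H^T y,
    assuming unit-variance i.i.d. source symbols."""
    Nu = H.shape[1]
    W = np.linalg.solve(H.T @ H + noise_var * np.eye(Nu), H.T)
    return W @ y

def lmmse_mse(H, noise_var):
    """Per-user MSE of the LMMSE estimate: diagonal of the posterior
    covariance (H^T H / sigma^2 + I)^(-1)."""
    Nu = H.shape[1]
    cov = np.linalg.inv(H.T @ H / noise_var + np.eye(Nu))
    return np.diag(cov)
```

In the overloaded case (N_u > N_s) the Gram matrix H^T H is rank-deficient, but the identity term from the unit-variance prior keeps both systems well conditioned, so the per-user MSE stays strictly between 0 and 1.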

    Estimating European volatile organic compound emissions using satellite observations of formaldehyde from the Ozone Monitoring Instrument

    Emission of non-methane Volatile Organic Compounds (VOCs) to the atmosphere stems from biogenic and human activities, and their estimation is difficult because of the many and not fully understood processes involved. In order to narrow down the uncertainty related to VOC emissions, which negatively affects our ability to simulate the atmospheric composition, we exploit satellite observations of formaldehyde (HCHO), a ubiquitous oxidation product of most VOCs, focusing on Europe. HCHO column observations from the Ozone Monitoring Instrument (OMI) reveal a marked seasonal cycle with a summer maximum and winter minimum. In summer, the oxidation of methane and other long-lived VOCs supplies a slowly varying background HCHO column, while HCHO variability is dominated by the most reactive VOCs, primarily biogenic isoprene, followed in importance by biogenic terpenes and anthropogenic VOCs. The chemistry-transport model CHIMERE qualitatively reproduces the temporal and spatial features of the observed HCHO column, but displays regional biases which are attributed mainly to incorrect biogenic VOC emissions, calculated with the Model of Emissions of Gases and Aerosols from Nature (MEGAN) algorithm. These "bottom-up" or a priori emissions are corrected through a Bayesian inversion of the OMI HCHO observations. The resulting "top-down" or a posteriori isoprene emissions are lower than the "bottom-up" estimates by 40% over the Balkans and by 20% over Southern Germany, and higher by 20% over the Iberian Peninsula, Greece and Italy. We conclude that OMI satellite observations of HCHO can provide a quantitative "top-down" constraint on the European "bottom-up" VOC inventories.
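The essence of a Bayesian inversion of bottom-up emissions can be sketched, for a single grid cell and a linearized column-to-emission response, as a precision-weighted update of the prior; the one-parameter formulation and all variable names below are illustrative assumptions, not the paper's actual inversion setup:

```python
import numpy as np

def bayesian_emission_update(E_prior, y_obs, y_model, K, sigma_obs, sigma_prior):
    """One-cell Bayesian update of an emission estimate.

    Assumes the modelled HCHO column responds linearly to the emission:
    y ≈ y_model + K * (E - E_prior). The posterior mean is the
    precision-weighted combination of the prior emission and the emission
    implied by the observation.
    """
    prec_prior = 1.0 / sigma_prior**2          # prior precision on E
    prec_obs = K**2 / sigma_obs**2             # observation precision mapped to E
    E_implied = E_prior + (y_obs - y_model) / K
    return (prec_prior * E_prior + prec_obs * E_implied) / (prec_prior + prec_obs)
```

When the model already matches the observation, the update returns the prior unchanged; when the observation is far more precise than the prior, the posterior moves almost entirely to the observation-implied emission, mirroring how the top-down estimates can fall 40% below the bottom-up inventory.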

    Reduced-complexity non-coherent soft-decision-aided DAPSK dispensing with channel estimation

    Differential Amplitude Phase Shift Keying (DAPSK), which is also known as star-shaped QAM, has implementational advantages not only because it dispenses with channel estimation, but also thanks to its low signal detection complexity. It is widely recognized that separately detecting the amplitude and the phase of a received DAPSK symbol exhibits a lower complexity than jointly detecting the two terms. However, since the amplitude and the phase of a DAPSK symbol are affected by correlated magnitude fading and phase rotations, detecting the two terms completely independently results in a performance loss, which is especially significant for soft-decision-aided DAPSK detectors relying on multiple receive antennas. Therefore, in this contribution, we propose a new soft-decision-aided DAPSK detection method, which achieves the optimum DAPSK detection capability at a substantially reduced detection complexity. More specifically, we link each a priori soft input bit to a specific part of the channel's output, so that only a reduced subset of the DAPSK constellation points has to be evaluated by the soft DAPSK detector. Our simulation results demonstrate that the proposed soft DAPSK detector exhibits a lower detection complexity than independently detecting the amplitude and the phase, while retaining the optimal DAPSK detection performance.
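The soft outputs such a detector produces are bit-wise log-likelihood ratios obtained by marginalizing per-symbol metrics over the constellation; the generic exact demapper below shows that full-set computation, which a reduced-subset detector evaluates over fewer candidate points. The bit mapping and metrics are hypothetical, and this is not the paper's detector itself:

```python
import numpy as np

def bit_llrs(metrics, labels):
    """Exact soft-demapping LLRs from per-symbol metrics.

    metrics[k] : log-likelihood of constellation point k for the observation
    labels[k]  : tuple of bits mapped to point k
    Returns one LLR per bit position, log P(b=0) - log P(b=1).
    """
    metrics = np.asarray(metrics, dtype=float)
    nbits = len(labels[0])
    llrs = []
    for i in range(nbits):
        m0 = np.array([m for m, lab in zip(metrics, labels) if lab[i] == 0])
        m1 = np.array([m for m, lab in zip(metrics, labels) if lab[i] == 1])
        # log-sum-exp marginalisation over each bit's subset of points
        llrs.append(np.logaddexp.reduce(m0) - np.logaddexp.reduce(m1))
    return llrs
```

Restricting each of the two sums to a subset of high-metric candidates is what trades a small amount of computation structure for a large complexity saving.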

    Coarse-to-Fine Adaptive People Detection for Video Sequences by Maximizing Mutual Information

    Applying people detectors to unseen data is challenging, since pattern distributions, such as viewpoints, motion, poses, backgrounds, occlusions and people sizes, may differ significantly from those of the training dataset. In this paper, we propose a coarse-to-fine framework to adapt people detectors frame by frame during runtime classification, without requiring any additional manually labeled ground truth apart from the offline training of the detection model. Such adaptation makes use of multiple detectors' mutual information, i.e., similarities and dissimilarities between detectors estimated by pair-wise correlation of their outputs. Globally, the proposed adaptation discriminates between relevant instants in a video sequence, i.e., it identifies the representative frames for an adaptation of the system. Locally, the proposed adaptation identifies the best configuration (i.e., detection threshold) of each detector under analysis by maximizing the mutual information. The proposed coarse-to-fine approach does not require training the detectors for each new scenario and uses standard people detector outputs, i.e., bounding boxes. The experimental results demonstrate that the proposed approach outperforms state-of-the-art detectors whose optimal threshold configurations are determined in advance and fixed from offline training data.
    This work has been partially supported by the Spanish government under the project TEC2014-53176-R (HAVideo).
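The local adaptation step, picking the detection threshold whose decisions share the most information with another detector, can be sketched as follows; the binary-decision formulation and the helper names are illustrative assumptions rather than the paper's exact procedure:

```python
import numpy as np

def mutual_information(a, b):
    """Mutual information (in nats) between two binary decision sequences."""
    a, b = np.asarray(a), np.asarray(b)
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))   # joint probability
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def best_threshold(scores, ref_decisions, thresholds):
    """Choose the threshold whose thresholded decisions share the most
    information with a reference detector's decisions."""
    mis = [mutual_information(scores >= t, ref_decisions) for t in thresholds]
    return thresholds[int(np.argmax(mis))]
```

A degenerate threshold that fires on everything (or nothing) yields zero mutual information, so the maximization naturally rejects it in favour of thresholds whose decisions actually co-vary with the other detector.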

    Terrain analysis using radar shape-from-shading

    This paper develops a maximum a posteriori (MAP) probability estimation framework for shape-from-shading (SFS) from synthetic aperture radar (SAR) images. The aim is to use this method to reconstruct surface topography from a single radar image of relatively complex terrain. Our MAP framework makes explicit how the recovery of local surface orientation depends on the whereabouts of terrain edge features and the available radar reflectance information. To apply the resulting process to real-world radar data, we require probabilistic models for the appearance of terrain features and the relationship between the orientation of surface normals and the radar reflectance. We show that the SAR data can be modeled using a Rayleigh-Bessel distribution and use this distribution to develop a maximum likelihood algorithm for detecting and labeling terrain edge features. Moreover, we show how robust statistics can be used to estimate the characteristic parameters of this distribution. We also develop an empirical model for the SAR reflectance function. Using the reflectance model, we perform Lambertian correction so that a conventional SFS algorithm can be applied to the radar data. The initial surface normal direction is constrained to point in the direction of the nearest ridge or ravine feature. Each surface normal must fall within a conical envelope whose axis is in the direction of the radar illuminant. The extent of the envelope depends on the corrected radar reflectance and the variance of the radar signal statistics. We explore various ways of smoothing the field of surface normals using robust statistics. Finally, we show how to reconstruct the terrain surface from the smoothed field of surface normal vectors. The proposed algorithm is applied to various SAR data sets containing relatively complex terrain structure.
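The use of robust statistics for estimating signal-statistics parameters can be illustrated with a median-based Rayleigh scale estimator, which resists outliers (e.g. bright edge pixels) better than the maximum-likelihood moment estimate. This plain-Rayleigh sketch stands in for the paper's Rayleigh-Bessel model:

```python
import numpy as np

def rayleigh_scale_robust(samples):
    """Robust Rayleigh scale estimate from the sample median.

    For a Rayleigh(sigma) variable the median is sigma * sqrt(2 ln 2),
    so inverting the median gives an outlier-resistant estimator."""
    return np.median(samples) / np.sqrt(2.0 * np.log(2.0))

def rayleigh_scale_ml(samples):
    """Maximum-likelihood Rayleigh scale: sqrt(mean(x^2) / 2).

    Accurate on clean data, but the squared term makes it highly
    sensitive to a few large outliers."""
    samples = np.asarray(samples, dtype=float)
    return np.sqrt(np.mean(samples**2) / 2.0)
```

On clean data both estimators agree; contaminating even 1% of the samples with bright outliers inflates the ML estimate badly while barely moving the median-based one, which is the motivation for the robust route.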

    Quantize and forward cooperative communication: joint channel and frequency offset estimation


    Point Process Algorithm: A New Bayesian Approach for Planet Signal Extraction with the Terrestrial Planet Finder

    The capability of the Terrestrial Planet Finder Interferometer (TPF-I) for planetary signal extraction, including both detection and spectral characterization, can be optimized by taking proper account of instrumental characteristics and astrophysical prior information. We have developed the Point Process Algorithm (PPA), a Bayesian technique for extracting planetary signals using the sine-chopped outputs of a dual nulling interferometer. It is so called because it represents the system being observed as a set of points in a suitably defined state space, thus providing a natural way of incorporating our prior knowledge of the compact nature of the targets of interest. It can also incorporate the spatial covariance of the exozodi as prior information, which could help mitigate false detections. Data at multiple wavelengths are used simultaneously, taking into account possible spectral variations of the planetary signals. Input parameters include the RMS measurement noise and the a priori probability of the presence of a planet. The output can be represented as an image of the intensity distribution on the sky, optimized for the detection of point sources. Previous approaches by others to the problem of planet detection for TPF-I have relied on the potentially non-robust identification of peaks in a "dirty" image, usually a correlation map. Tests with synthetic data suggest that the PPA provides greater sensitivity to faint sources than the standard approach (correlation map + CLEAN), and will be a useful tool for optimizing the design of TPF-I.
    Comment: 17 pages, 6 figures. AJ in press (scheduled for Nov 2006)
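The Bayesian flavour of the PPA, weighing measured data against an a priori probability of planet presence, can be illustrated with a toy two-hypothesis test under Gaussian noise; this is a deliberately simplified stand-in with hypothetical inputs, not the PPA's point-process state-space model:

```python
import numpy as np

def planet_posterior(data, template, noise_rms, prior_p):
    """Posterior probability that a point source with known signature
    (`template`) is present in noisy data, versus noise only.

    Combines the Gaussian log-likelihood ratio of the two hypotheses
    with the prior odds of a planet being present."""
    ll_present = -0.5 * np.sum((data - template)**2) / noise_rms**2
    ll_absent = -0.5 * np.sum(data**2) / noise_rms**2
    log_odds = ll_present - ll_absent + np.log(prior_p / (1.0 - prior_p))
    return 1.0 / (1.0 + np.exp(-log_odds))   # logistic of the log-odds
```

Even with a small prior probability, a clear template match drives the posterior near 1, while pure noise drives it near 0; this is the basic mechanism a point-source-optimized Bayesian detector exploits.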