A survey of the state of the art and focused research in range systems, task 2
Contract-generated publications are compiled which describe the research activities for the reporting period. Study topics include: equivalent configurations of systolic arrays; least squares estimation algorithms with systolic array architectures; modeling and equalization of nonlinear bandlimited satellite channels; and least squares estimation and Kalman filtering by systolic arrays.
A survey on fiber nonlinearity compensation for 400 Gbps and beyond optical communication systems
Optical communication systems represent the backbone of modern communication
networks. Since their deployment, different fiber technologies have been used
to deal with optical fiber impairments such as dispersion-shifted fibers and
dispersion-compensation fibers. In recent years, thanks to the introduction of
coherent detection based systems, fiber impairments can be mitigated using
digital signal processing (DSP) algorithms. Coherent systems are used in the
current 100 Gbps wavelength-division multiplexing (WDM) standard technology.
They allow the increase of spectral efficiency by using multi-level modulation
formats, and are combined with DSP techniques to combat the linear fiber
distortions. In addition to linear impairments, the next generation 400 Gbps/1
Tbps WDM systems are also more affected by the fiber nonlinearity due to the
Kerr effect. At high input power, the fiber nonlinear effects become more
important and their compensation is required to improve the transmission
performance. Several approaches have been proposed to deal with the fiber
nonlinearity. In this paper, after a brief description of the Kerr-induced
nonlinear effects, a survey on the fiber nonlinearity compensation (NLC)
techniques is provided. We focus on the well-known NLC techniques and discuss
their performance, as well as their implementation and complexity. An extension
of the inter-subcarrier nonlinear interference canceler approach is also
proposed. A performance evaluation of the well-known NLC techniques and the
proposed approach is provided in the context of Nyquist and super-Nyquist
superchannel systems.
Comment: Accepted in IEEE Communications Surveys and Tutorials.
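Among the best-known NLC techniques covered by such surveys is digital back-propagation (DBP), which numerically inverts the fiber channel by running a split-step model backwards. The sketch below is a minimal illustration of that idea, not the paper's implementation: it assumes a single-polarization, lossless fiber with only dispersion and Kerr nonlinearity, and the function name and parameters are illustrative.

```python
import numpy as np

def digital_backprop(rx, n_steps, dz, beta2, gamma, fs):
    """Illustrative digital back-propagation: undo fiber propagation by
    alternating an inverse linear (dispersion) step in the frequency
    domain with an inverse nonlinear (Kerr phase) step in the time domain."""
    omega = 2 * np.pi * np.fft.fftfreq(rx.size, d=1 / fs)
    # inverse of the forward dispersion operator exp(-1j*beta2/2*omega^2*dz)
    lin_inv = np.exp(1j * beta2 / 2 * omega**2 * dz)
    x = np.asarray(rx, dtype=complex)
    for _ in range(n_steps):
        x = np.fft.ifft(np.fft.fft(x) * lin_inv)          # undo dispersion
        x = x * np.exp(-1j * gamma * np.abs(x)**2 * dz)   # undo Kerr phase
    return x
```

Because each backward step exactly inverts the corresponding forward split step, a noise-free forward simulation followed by this loop recovers the transmitted waveform; in practice the accuracy-complexity trade-off is set by the number of steps per span.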
Bit error performance of diffuse indoor optical wireless channel pulse position modulation system employing artificial neural networks for channel equalisation
The bit-error rate (BER) performance of a pulse position modulation (PPM) scheme for non-line-of-sight indoor optical links employing channel equalisation based on the artificial neural network (ANN) is reported. Channel equalisation is achieved by training a multilayer perceptron ANN. A comparative study of the unequalised `soft' decision decoding and the `hard' decision decoding along with the neural-equalised `soft' decision decoding is presented for different bit resolutions for optical channels with different delay spreads. We show that the unequalised `hard' decision decoding performs the worst for all values of normalised delay spread, becoming impractical beyond a normalised delay spread of 0.6. However, `soft' decision decoding with/without equalisation displays relatively improved performance for all values of the delay spread. The study shows that for a highly diffuse channel, the signal-to-noise ratio requirement to achieve a BER of 10^-5 for the ANN-based equaliser is ~10 dB lower compared with the unequalised `soft' decoding for 16-PPM at a data rate of 155 Mbps. Our results indicate that for all ranges of delay spread, neural network equalisation is an effective tool for mitigating the inter-symbol interference.
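The core idea of an ANN equaliser, training a small perceptron to map a window of received samples to the transmitted symbol, can be shown with a toy version. This is a minimal sketch and not the paper's setup: it assumes noise-free on-off keying over a short ISI channel rather than PPM over a diffuse optical channel, and all names and hyperparameters are illustrative.

```python
import numpy as np

def _windows(rx, taps):
    """Sliding windows of received samples, centred on each symbol."""
    pad = taps // 2
    xp = np.pad(rx, (pad, pad))
    return np.stack([xp[i:i + taps] for i in range(len(rx))])

def train_mlp_equaliser(rx, bits, taps=5, hidden=8, epochs=3000, lr=0.5, seed=0):
    """Fit a one-hidden-layer perceptron (tanh hidden units, logistic
    output) mapping the window around sample i to transmitted bit i."""
    rng = np.random.default_rng(seed)
    W1 = 0.5 * rng.standard_normal((hidden, taps))
    b1 = np.zeros(hidden)
    w2 = 0.5 * rng.standard_normal(hidden)
    b2 = 0.0
    X = _windows(rx, taps)
    for _ in range(epochs):
        h = np.tanh(X @ W1.T + b1)
        y = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))
        d2 = (y - bits) / len(bits)            # cross-entropy gradient
        dh = np.outer(d2, w2) * (1.0 - h**2)   # back-prop through tanh
        w2 -= lr * h.T @ d2
        b2 -= lr * d2.sum()
        W1 -= lr * dh.T @ X
        b1 -= lr * dh.sum(axis=0)
    return W1, b1, w2, b2

def mlp_equalise(rx, W1, b1, w2, b2):
    """Hard bit decisions from the trained network."""
    h = np.tanh(_windows(rx, W1.shape[1]) @ W1.T + b1)
    return (h @ w2 + b2 > 0).astype(int)
```

On a mild ISI channel the trained network's decisions beat a fixed-threshold slicer on the raw samples, which is the qualitative effect the abstract reports for the diffuse optical channel.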
Coded Modulation Assisted Radial Basis Function Aided Turbo Equalisation for Dispersive Rayleigh Fading Channels
In this contribution a range of Coded Modulation (CM) assisted Radial Basis Function (RBF) based Turbo Equalisation (TEQ) schemes are investigated when communicating over dispersive Rayleigh fading channels. Specifically, 16QAM based Trellis Coded Modulation (TCM), Turbo TCM (TTCM), Bit-Interleaved Coded Modulation (BICM) and iteratively decoded BICM (BICM-ID) are evaluated in the context of an RBF based TEQ scheme and a reduced-complexity RBF based In-phase/Quadrature-phase (I/Q) TEQ scheme. The Least Mean Square (LMS) algorithm was employed for channel estimation, where the initial estimation step-size used was 0.05, which was reduced to 0.01 for the second and subsequent TEQ iterations. The achievable coding gain of the various CM schemes was significantly increased when employing the proposed RBF-TEQ or RBF-I/Q-TEQ rather than the conventional non-iterative Decision Feedback Equaliser (DFE). Explicitly, the reduced-complexity RBF-I/Q-TEQ-CM achieved a similar performance to the full-complexity RBF-TEQ-CM, while attaining a significant complexity reduction. The best overall performer was the RBF-I/Q-TEQ-TTCM scheme, requiring an SNR only 1.88 dB higher at a BER of 10^-5 than the identical-throughput 3 BPS uncoded 8PSK scheme communicating over an AWGN channel. The coding gain of the scheme was 16.78 dB.
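The LMS channel estimator mentioned in the abstract, including its step-size schedule (0.05 for the first pass, 0.01 for later TEQ iterations), can be sketched as follows. This is a minimal sketch assuming known training symbols; the function name and the test channel are illustrative, not taken from the paper.

```python
import numpy as np

def lms_channel_estimate(tx, rx, n_taps, mu, h0=None):
    """One LMS pass: predict each received sample from the n_taps most
    recent transmitted symbols and nudge the tap estimates by mu * error."""
    h = np.zeros(n_taps, dtype=complex) if h0 is None else h0.astype(complex)
    for n in range(n_taps - 1, len(tx)):
        x = tx[n - n_taps + 1:n + 1][::-1]   # newest symbol first
        err = rx[n] - h @ x                  # prediction error
        h = h + mu * err * np.conj(x)        # stochastic-gradient update
    return h
```

Mirroring the schedule in the abstract, a first pass with mu = 0.05 gives a coarse estimate that subsequent passes refine with mu = 0.01.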
MIMO decision feedback equalization from an H∞ perspective
We approach the multiple input multiple output (MIMO) decision feedback equalization (DFE) problem in digital communications from an H∞ estimation point of view. Using the standard (and simplifying) assumption that all previous decisions are correct, we obtain an explicit parameterization of all H∞ optimal DFEs. In particular, we show that, under the above assumption, minimum mean square error (MMSE) DFEs are H∞ optimal. The H∞ approach also suggests a method for dealing with errors in previous decisions.
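The "all previous decisions are correct" assumption can be made concrete with a toy decision-feedback detector: past decisions are fed back to cancel the ISI they caused before the current symbol is sliced. The sketch below is a generic zero-forcing-style DFE for BPSK over a known monic channel, not the paper's H∞ or MIMO design; the names and the channel are illustrative.

```python
import numpy as np

def dfe_detect(rx, postcursor):
    """Decision-feedback detection for a monic channel h = [1, postcursor...]:
    subtract the ISI attributed to already-decided symbols, then slice.
    Assumes, as in the abstract, that all previous decisions are correct."""
    decisions = []
    for n, r in enumerate(rx):
        isi = sum(postcursor[k] * decisions[n - 1 - k]
                  for k in range(len(postcursor)) if n - 1 - k >= 0)
        decisions.append(1.0 if r - isi >= 0 else -1.0)   # BPSK slicer
    return np.array(decisions)
```

In the noise-free case the feedback cancels the postcursor ISI exactly, so the detector recovers the transmitted sequence; with noise, an erroneous decision propagates through the feedback filter, which is precisely the failure mode the H∞ approach is said to help address.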
Digital communications techniques Interim report, 15 Sep. 1969 - 15 Feb. 1970
Convolutional codes and recursive signal processing for digital communication
Iterative Decoding and Turbo Equalization: The Z-Crease Phenomenon
Iterative probabilistic inference, popularly dubbed the soft-iterative paradigm, has found great use in a wide range of communication applications, including turbo decoding and turbo equalization. The classic approach to analyzing iterative processing inevitably uses statistical and information-theoretic tools that bear an ensemble-average flavor. This paper considers the per-block error rate performance and analyzes it using nonlinear dynamical systems theory. By modeling the iterative processor as a nonlinear dynamical system, we report a universal "Z-crease phenomenon": the zig-zag or up-and-down fluctuation -- rather than the monotonic decrease -- of the per-block errors as the number of iterations increases. Using the turbo decoder as an example, we also report several interesting motion phenomena which were not previously reported, and which appear to correspond well with the notions of "pseudo codewords" and "stopping/trapping sets." We further propose a heuristic stopping criterion to control the Z-crease and identify the best iteration. Our stopping criterion is most useful for controlling the worst-case per-block errors, and helps to significantly reduce the average number of iterations.
Comment: 6 pages.
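The abstract does not specify the stopping criterion itself. One plausible generic reading, given that per-block errors fluctuate rather than decrease monotonically, is to score each iteration with a reliability metric (for instance, the number of unsatisfied parity checks), keep the best state seen so far, and stop once the score has not improved for a few rounds. The sketch below implements that generic pattern; all names are illustrative and it is not the paper's rule.

```python
def run_with_best_iteration(step, metric, max_iters=20, patience=3):
    """Iterate `step`, score each state with `metric` (lower is better),
    and return the best state seen and its iteration index, stopping early
    once the metric has not improved for `patience` iterations."""
    state = step(None)                       # initial iteration
    best_state, best_score, best_iter = state, metric(state), 0
    since_best = 0
    for it in range(1, max_iters):
        state = step(state)
        score = metric(state)
        if score < best_score:
            best_state, best_score, best_iter = state, score, it
            since_best = 0
        else:
            since_best += 1
            if since_best >= patience:       # metric stuck on a Z-crease
                break
    return best_state, best_iter
```

Returning the best iteration rather than the last one is what matters under zig-zag behavior: the final iteration may be worse than an earlier one.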