It has been widely observed that there exists a fundamental trade-off between
the minimum (Hamming) distance properties and the iterative decoding
convergence behavior of turbo-like codes. While capacity-achieving code
ensembles are typically asymptotically bad, in the sense that their minimum
distance does not grow linearly with the block length, and therefore exhibit
an error floor at moderate-to-high signal-to-noise ratios, asymptotically good
codes usually converge further away from channel capacity. In this paper, we
introduce the concept of tuned turbo codes, a family of asymptotically good
hybrid concatenated code ensembles, where asymptotic minimum distance growth
rates, convergence thresholds, and code rates can be traded off using two
tuning parameters, $\lambda$ and $\mu$. Decreasing $\lambda$ reduces the
asymptotic minimum distance growth rate in exchange for improved iterative
decoding convergence behavior, while increasing $\lambda$ raises the growth
rate at the expense of worse convergence behavior; the code performance can
thus be tuned to fit the desired application. Decreasing $\mu$ yields a
similar tuning behavior for higher-rate
code ensembles.

Comment: Accepted for publication in IEEE Transactions on Information Theory.