
    Linear MMSE-Optimal Turbo Equalization Using Context Trees

    Formulations of the turbo equalization approach to iterative equalization and decoding vary greatly when channel knowledge is partially or completely unknown. Maximum a posteriori probability (MAP) and minimum mean square error (MMSE) approaches leverage channel knowledge to make explicit use of soft information (priors over the transmitted data bits) in a manner that is distinctly nonlinear, appearing either in a trellis formulation (MAP) or inside an inverted matrix (MMSE). To date, nearly all adaptive turbo equalization methods either estimate the channel or use a direct adaptation equalizer in which estimates of the transmitted data are formed from an expressly linear function of the received data and the soft information, with the latter formulation being the most common. We study a class of direct adaptation turbo equalizers that are both adaptive and nonlinear functions of the soft information from the decoder. We introduce piecewise linear models based on context trees that adaptively approximate the nonlinear dependence of the equalizer on the soft information, choosing both the partition regions and the locally linear equalizer coefficients in each region independently, with computational complexity that remains of the order of a traditional direct adaptive linear equalizer. This approach is guaranteed to asymptotically achieve the performance of the best piecewise linear equalizer, and we quantify the MSE performance of the resulting algorithm and the convergence of its MSE to that of the linear minimum MSE estimator as the depth of the context tree and the data length increase. Comment: Submitted to the IEEE Transactions on Signal Processing.
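
    As a loose illustration of the approach described above (not the authors' implementation; the depth, step sizes, and update rules here are invented), the following Python sketch bisects the soft-information interval [-1, 1] with a depth-D context tree, runs an LMS-adapted linear equalizer at every node on the visited path, and mixes the node outputs with exponential weights so the mixture tracks the best pruning of the tree:

        import numpy as np

        D = 3            # context tree depth (assumed)
        N = 5            # equalizer taps (assumed)
        mu = 0.01        # LMS step size (assumed)
        eta = 0.1        # mixture learning rate (assumed)

        n_nodes = 2 ** (D + 1) - 1
        W = np.zeros((n_nodes, N))      # per-node linear equalizer coefficients
        logw = np.zeros(n_nodes)        # per-node mixture log-weights

        def path(soft):
            """Indices of the nodes visited for a soft value in [-1, 1]."""
            idx, lo, hi, nodes = 0, -1.0, 1.0, [0]
            for _ in range(D):
                mid = (lo + hi) / 2.0
                if soft < mid:
                    idx, hi = 2 * idx + 1, mid
                else:
                    idx, lo = 2 * idx + 2, mid
                nodes.append(idx)
            return nodes

        def equalize_and_update(y, soft, x_true):
            """One step: y = received tap vector, soft = decoder prior."""
            nodes = path(soft)
            preds = W[nodes] @ y                   # one estimate per visited node
            w = np.exp(logw[nodes] - logw[nodes].max())
            w /= w.sum()
            x_hat = float(w @ preds)               # combined piecewise linear estimate
            for k, p in zip(nodes, preds):
                e = x_true - p                     # training/decision-directed error
                W[k] += mu * e * y                 # LMS update of the local filter
                logw[k] -= eta * e ** 2            # reweight by squared-error loss
            return x_hat

        # Toy call: 5 received samples, decoder soft value 0.2, true symbol +1.
        print(equalize_and_update(np.ones(N), 0.2, 1.0))

    Updating only the D+1 nodes on the visited path is what keeps the per-symbol cost within a constant factor of a single adaptive linear equalizer.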

    Robust estimation in flat fading channels under bounded channel uncertainties

    We investigate the channel equalization problem for time-varying flat fading channels under bounded channel uncertainties. We analyze three robust methods to estimate an unknown signal transmitted through a time-varying flat fading channel. These methods are based on minimizing certain mean-square error criteria that incorporate the channel uncertainties into their problem formulations instead of directly using the inaccurate channel information that is available. We present closed-form solutions to the channel equalization problem for each method, for both zero-mean and nonzero-mean signals, and illustrate the performance of the equalization methods through simulations. © 2013 Elsevier Inc. All rights reserved.
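
    The paper derives closed-form solutions; the toy below instead finds a worst-case (minimax) linear estimator numerically for a scalar flat-fading model, just to make the bounded-uncertainty formulation concrete. The channel, bound, variances, and grid resolution are all assumed values:

        import numpy as np

        # y = (h + d) * x + n, with channel error only known to satisfy |d| <= eps.
        h, eps = 1.0, 0.3          # nominal channel and uncertainty bound (assumed)
        sx2, sn2 = 1.0, 0.1        # signal and noise variances (assumed)

        def mse(g, d):
            # E|g*y - x|^2 when the true channel is h + d
            return (1.0 - g * (h + d)) ** 2 * sx2 + g ** 2 * sn2

        gs = np.linspace(0.0, 2.0, 2001)
        ds = np.linspace(-eps, eps, 201)
        worst = np.array([max(mse(g, d) for d in ds) for g in gs])
        g_rob = gs[worst.argmin()]             # gain minimizing the worst-case MSE

        g_naive = h * sx2 / (h ** 2 * sx2 + sn2)   # MMSE gain for the nominal channel
        print(f"robust gain {g_rob:.3f}, worst-case MSE {worst.min():.3f}")
        print(f"nominal-MMSE gain {g_naive:.3f}, "
              f"worst-case MSE {max(mse(g_naive, d) for d in ds):.3f}")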

    Tracking the best level set in a level-crossing analog-to-digital converter

    In this paper, we investigate level-crossing (LC) analog-to-digital converters (ADCs) in a competitive algorithm framework. In particular, we study how the level sets of an LC ADC should be selected in order to track the dynamical changes in the analog signal for effective sampling. We introduce a sequential LC sampling algorithm that asymptotically achieves the performance of the best LC sampling method, which can choose both its sampling levels (from a large class of possible level sets) and the intervals (from the continuum of all possible intervals) over which these levels are used, based on observing the whole analog signal in hindsight. The results we introduce are guaranteed to hold for each individual signal, without any stochastic assumptions on the underlying signal. © 2012 Published by Elsevier Inc.
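
    A toy version of the competitive idea, with an invented signal, invented candidate level sets, and a zero-order-hold loss standing in for the paper's performance measure: in each block, every candidate level set is scored by how well its level-crossing samples reconstruct the signal, and exponential weighting shifts mass toward the best-performing set:

        import numpy as np

        t = np.linspace(0, 10, 5000)
        x = np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.1 * t)   # example analog signal

        level_sets = [np.linspace(-1, 1, m) for m in (3, 5, 9, 17)]
        logw = np.zeros(len(level_sets))
        eta = 5.0                                  # weighting rate (assumed)

        def lc_loss(sig, levels):
            """MSE of a zero-order-hold reconstruction from level crossings."""
            hold, err = sig[0], 0.0
            for i in range(1, len(sig)):
                for L in levels:
                    if (sig[i - 1] - L) * (sig[i] - L) < 0:   # level crossed
                        hold = L
                err += (sig[i] - hold) ** 2
            return err / (len(sig) - 1)

        for block in np.array_split(x, 10):        # process the signal in blocks
            losses = np.array([lc_loss(block, lv) for lv in level_sets])
            logw -= eta * losses                   # exponential weighting update

        best = level_sets[int(np.argmax(logw))]
        print(f"weighting favors the level set with {len(best)} levels")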

    Zero-rate feedback can achieve the empirical capacity

    The utility of limited feedback for coding over an individual sequence of DMCs is investigated. This study complements recent results showing how limited or noisy feedback can boost the reliability of communication. A strategy with fixed input distribution P is given that asymptotically achieves rates arbitrarily close to the mutual information induced by P and the state-averaged channel. When the capacity-achieving input distribution is the same over all channel states, this achieves rates at least as large as the capacity of the state-averaged channel, sometimes called the empirical capacity. Comment: Revised version of paper originally submitted to IEEE Transactions on Information Theory, Nov. 2007. This version contains further revisions and clarifications.
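
    The rate the strategy targets is the mutual information induced by the fixed input distribution P on the state-averaged channel. The snippet below computes that quantity for two invented binary symmetric channel states with assumed empirical frequencies:

        import numpy as np

        P = np.array([0.5, 0.5])                       # fixed input distribution
        W = np.array([[[0.9, 0.1], [0.1, 0.9]],        # BSC(0.1) in state 0
                      [[0.7, 0.3], [0.3, 0.7]]])       # BSC(0.3) in state 1
        freq = np.array([0.6, 0.4])                    # empirical state frequencies

        W_avg = np.tensordot(freq, W, axes=1)          # state-averaged channel

        def mutual_information(P, W):
            """I(X; Y) in bits for input distribution P and channel W[x, y]."""
            joint = P[:, None] * W
            Py = joint.sum(axis=0)
            with np.errstate(divide="ignore", invalid="ignore"):
                terms = joint * np.log2(joint / (P[:, None] * Py[None, :]))
            return np.nansum(terms)

        print(f"achievable rate ~ {mutual_information(P, W_avg):.4f} bits/use")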

    Machine learning in quantitative finance

    This thesis consists of three chapters.

    Chapter 1 aims to decrease the time complexity of multi-output relevance vector regression from O(VM^3) to O(V^3+M^3), where V is the number of output dimensions, M is the number of basis functions, and V<M. The experimental results demonstrate that the proposed method is more competitive than the existing method with regard to computation time. MATLAB codes are available at http://www.mathworks.com/matlabcentral/fileexchange/49131.

    The performance of online (sequential) portfolio selection (OPS), which rebalances a portfolio in every period (e.g. daily or weekly) in order to maximise the portfolio's expected terminal wealth in the long run, has been overestimated by the ideal assumption of unlimited market liquidity (i.e. no market impact costs). Chapter 2 therefore proposes a new transaction cost factor model that considers market impact costs, estimated from limit order book data, as well as proportional transaction costs (e.g. brokerage commissions or transaction taxes at a fixed percentage), both for measuring OPS performance in a more practical way and for developing a new OPS method. Backtesting results on historical limit order book data of NASDAQ-traded stocks show both the performance deterioration of OPS caused by market impact costs and the superiority of the proposed OPS method in an environment of limited market liquidity. MATLAB codes are available at http://www.mathworks.com/matlabcentral/fileexchange/56496.

    Chapter 3 proposes an optimal intraday trading strategy to absorb the shock to the stock market when an online portfolio selection algorithm rebalances a portfolio. It considers real-time limit order book data and splits a very large market order into a number of consecutive market orders to minimise the overall transaction costs, consisting of market impact costs as well as proportional transaction costs. Specifically, it optimises both the number of intraday trades and the intraday trading path for a multi-asset portfolio. Backtesting results on historical limit order book data of NASDAQ-traded stocks show the superiority of the proposed trading algorithm in an environment of limited market liquidity. MATLAB codes are available at http://www.mathworks.com/matlabcentral/fileexchange/62503.
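
    As an illustration of the market impact component modeled in Chapter 2 (the book levels and order size below are invented, and this is not the thesis's estimator), a market buy order that "walks the book" pays more than the mid price, and the difference is the impact cost:

        import numpy as np

        ask_prices = np.array([100.00, 100.01, 100.02, 100.05])   # best ask first
        ask_sizes = np.array([200, 150, 300, 500])                # shares quoted
        mid_price = 99.995

        def market_impact_cost(order_size, prices, sizes, mid):
            """Extra cost vs. trading the whole order at the mid price."""
            remaining, paid = order_size, 0.0
            for p, s in zip(prices, sizes):
                take = min(remaining, s)          # consume this book level
                paid += take * p
                remaining -= take
                if remaining == 0:
                    break
            if remaining > 0:
                raise ValueError("order exceeds displayed liquidity")
            return paid - order_size * mid

        q = 500
        cost = market_impact_cost(q, ask_prices, ask_sizes, mid_price)
        print(f"impact cost of buying {q} shares: ${cost:.2f} "
              f"({cost / (q * mid_price) * 1e4:.1f} bps)")

    Splitting the same order across time, as Chapter 3 does, trades off this impact cost against the proportional costs incurred on each child order.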

    Portfolio construction under information asymmetry

    We introduce in this thesis the idea of a variable lookback model, i.e., a model whose predictions are based on a variable portion of the information set. We verify the intuition behind this model in the context of experimental finance. We also propose a novel algorithm to estimate it, the variable lookback algorithm, and apply the latter to build investment strategies.

    Financial markets under information asymmetry are characterized by the presence of better-informed investors, also called insiders. The literature in finance has so far concentrated on theoretical models describing such markets, in particular on the role played by the price in conveying information from informed to uninformed investors. However, the implications of these theories have not yet been incorporated into processing methods that extract information from past prices, and this is the aim of this thesis. More specifically, the presence of a time-varying number of insiders induces time-varying predictability in the price process, which calls for models that use a variable lookback window.

    Although our initial motivation comes from the study of markets under information asymmetry, the problem is more general, as it touches on several issues in statistical modeling. The first concerns the structure of the model: existing methods use a fixed model structure despite evidence from data supporting an adaptive one. The second concerns the improper handling of nonstationarity in data. The stationarity assumption facilitates the mathematical treatment; hence, existing methods rely on some form of stationarity, for example by assuming local stationarity, as in the windowing approach, or by modeling the underlying switching process, for example with a Markov chain of order 1. However, these approaches suffer from certain limitations, and more advanced methods that explicitly take into account the nonstationarity of the signal are desirable. In summary, there is a need for a method that constantly monitors what the appropriate structure is, when a certain model works and when it does not, and when the underlying assumptions of the model are violated.

    We verify our initial intuition in the context of experimental finance. In particular, we highlight the diffusion of information in the market. We give a precise definition to the notion of the time of maximally informative price and verify, in line with existing theories, that the time of maximally informative price is inversely proportional to the number of insiders in the market. This supports the idea of a variable lookback model.

    Then, we develop an estimation algorithm that simultaneously selects the order of the process and the lookback window based on the minimum description length principle. The algorithm maintains a series of estimators, each based on a different order and/or information set. The selection is based on an information-theoretic criterion that accounts for the ability of the model to fit the data, penalized by the model complexity and the amount of switching between models.

    Finally, we put the algorithm to work and build investment strategies. We devise a method to dynamically draw the trend line for the time series of log-prices and propose an adaptive version of the well-known momentum strategy. The latter outperforms standard benchmarks, in particular during the 2009 momentum crash.
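
    A minimal sketch of the kind of MDL-based selection described above, jointly choosing an autoregressive order and a lookback window by a two-part code length; the thesis's variable lookback algorithm additionally penalizes switching between models and operates sequentially. The window sizes, orders, and test series here are invented:

        import numpy as np

        def fit_ar(x, p):
            """Least-squares AR(p) fit; returns the residual sum of squares."""
            Y = x[p:]
            X = np.column_stack([x[p - i - 1 : len(x) - i - 1] for i in range(p)])
            coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
            return np.sum((Y - X @ coef) ** 2)

        def mdl(x, p):
            """Two-part code length: data under AR(p) plus model cost."""
            n = len(x) - p
            rss = fit_ar(x, p)
            return 0.5 * n * np.log(max(rss / n, 1e-12)) + 0.5 * p * np.log(n)

        rng = np.random.default_rng(1)
        series = np.cumsum(rng.standard_normal(500))   # stand-in for log-prices

        # Score every (window, order) pair on the most recent data and keep
        # the pair with the shortest description length.
        best = min(((w, p) for w in (50, 100, 200, 400) for p in (1, 2, 3)),
                   key=lambda wp: mdl(series[-wp[0]:], wp[1]))
        print(f"selected lookback window {best[0]}, AR order {best[1]}")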