
    On classification, ranking, and probability estimation

    We introduce a lexicographic ranker, LexRank, whose rankings are derived not from scores, but from a simple ranking of attribute values obtained from the training data. Although various metrics can be used, we show that by using the odds ratio to rank the attribute values we obtain a ranker that is conceptually close to the naive Bayes classifier, in the sense that for every instance of LexRank there exists an instance of naive Bayes that achieves the same ranking. However, the reverse is not true, which means that LexRank is more biased than naive Bayes. We systematically develop the relationships and differences between classification, ranking, and probability estimation, which leads to a novel connection between the Brier score and ROC curves. Combining LexRank with isotonic regression, which derives probability estimates from the ROC convex hull, results in the lexicographic probability estimator LexProb.
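
    The abstract outlines how LexRank orders attribute values by their odds ratio and then compares instances lexicographically. The Python sketch below illustrates that idea under stated assumptions and is not the authors' implementation: attribute-value odds ratios are estimated from the training data with simple Laplace smoothing, and the attributes are assumed to be given already in order of importance.

        # Illustrative sketch of a LexRank-style lexicographic ranker.
        # Assumptions: binary labels (1 = positive), categorical attributes,
        # and an attribute order fixed in advance; all names are hypothetical.
        from collections import defaultdict

        def value_ranks(X, y):
            """For each attribute, rank its values by the Laplace-smoothed
            odds ratio P(v | positive) / P(v | negative)."""
            n_pos = sum(1 for label in y if label == 1)
            n_neg = len(y) - n_pos
            ranks = []
            for a in range(len(X[0])):
                pos, neg = defaultdict(int), defaultdict(int)
                values = set()
                for xi, yi in zip(X, y):
                    values.add(xi[a])
                    (pos if yi == 1 else neg)[xi[a]] += 1
                odds = {v: ((pos[v] + 1) / (n_pos + len(values))) /
                           ((neg[v] + 1) / (n_neg + len(values))) for v in values}
                ordered = sorted(values, key=lambda v: -odds[v])
                ranks.append({v: r for r, v in enumerate(ordered)})
            return ranks

        def lex_key(x, ranks):
            """Lexicographic sort key; smaller keys mean 'more likely positive'."""
            return tuple(ranks[a][x[a]] for a in range(len(x)))

        # Usage: rank test instances from most to least likely positive.
        X = [("sunny", "hot"), ("rain", "mild"), ("sunny", "mild"), ("rain", "hot")]
        y = [1, 0, 1, 0]
        ranks = value_ranks(X, y)
        print(sorted([("rain", "hot"), ("sunny", "mild")], key=lambda x: lex_key(x, ranks)))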

    Learning Conditional Lexicographic Preference Trees

    We introduce a generalization of lexicographic orders and argue that this generalization constitutes an interesting model class for preference learning in general and ranking in particular. We propose a learning algorithm for inducing a so-called conditional lexicographic preference tree from a given set of training data in the form of pairwise comparisons between objects. Experimentally, we validate our algorithm in the setting of multipartite ranking.
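
    As a companion to the abstract above, the sketch below shows, under illustrative assumptions rather than as the paper's algorithm, how a conditional lexicographic preference tree compares two objects: each node tests one attribute with a preferred value order, and the child consulted next depends on the value observed at the parent.

        # Illustrative conditional lexicographic preference tree (LP-tree).
        # All attribute and value names are hypothetical.
        class LPNode:
            def __init__(self, attr, value_order, children=None):
                self.attr = attr                  # attribute tested at this node
                self.value_order = value_order    # preferred values, best first
                self.children = children or {}    # observed value -> child node

        def prefers(node, a, b):
            """Return 1 if object a is preferred to b, -1 if b to a, 0 if tied."""
            while node is not None:
                va, vb = a[node.attr], b[node.attr]
                if va != vb:
                    return 1 if node.value_order.index(va) < node.value_order.index(vb) else -1
                node = node.children.get(va)      # descend conditionally on the shared value
            return 0

        # Toy tree: the preferred wine depends on the main course.
        tree = LPNode("course", ["fish", "meat"], {
            "fish": LPNode("wine", ["white", "red"]),
            "meat": LPNode("wine", ["red", "white"]),
        })
        a = {"course": "fish", "wine": "white"}
        b = {"course": "fish", "wine": "red"}
        print(prefers(tree, a, b))   # 1: given fish, white wine is preferred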

    Sequential asset ranking in nonstationary time series

    We extend the research into cross-sectional momentum trading strategies. Our main result is our novel ranking algorithm, the naive Bayes asset ranker (nbar), which we use to select subsets of assets to trade from the S&P 500 index. We perform feature representation transfer from radial basis function networks to a curds and whey (caw) multivariate regression model that takes advantage of the correlations between the response variables to improve predictive accuracy. The nbar ranks this regression output by forecasting the one-step-ahead sequential posterior probability that individual assets will be ranked higher than other portfolio constituents. Earlier algorithms, such as the weighted majority, deal with nonstationarity by ensuring that the weights assigned to each expert never dip below a minimum threshold, but they never increase an expert's weight again once it has been reduced. Our ranking algorithm instead allows experts who previously performed poorly to regain weight when they start performing well. Our algorithm outperforms a strategy that would hold the long-only S&P 500 index with hindsight, despite the index appreciating by 205% during the test period. It also outperforms a regress-then-rank baseline, the caw model.
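
    The abstract describes ranking assets by a sequentially updated posterior probability of outperforming the other constituents, with weights that can recover after poor performance. The sketch below is a simplified illustration of that idea, not the paper's nbar: a discounted Beta-Bernoulli posterior per asset of beating the cross-sectional median return, where the discount factor lets previously poor assets regain a high rank.

        # Illustrative sequential ranker with discounted Beta-Bernoulli posteriors.
        # The update rule and the 'beat the median' target are assumptions made
        # for this sketch; they are not taken from the paper.
        import numpy as np

        def sequential_rank(returns, gamma=0.97):
            """returns: (T, N) array of per-period returns; yields a ranking per period."""
            T, N = returns.shape
            alpha = np.ones(N)   # pseudo-counts of 'beat the cross-sectional median'
            beta = np.ones(N)    # pseudo-counts of 'missed the median'
            for t in range(T):
                p = alpha / (alpha + beta)        # posterior mean before seeing period t
                yield np.argsort(-p)              # best-ranked asset first
                beat = returns[t] > np.median(returns[t])
                alpha = gamma * alpha + beat      # discount old evidence, add new
                beta = gamma * beta + (~beat)

        # Toy usage with simulated returns for 5 assets over 10 periods.
        rng = np.random.default_rng(0)
        for ranking in sequential_rank(rng.normal(size=(10, 5))):
            pass
        print(ranking)   # final period's ranking, best asset first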

    Archives of Data Science, Series A. Vol. 1,1: Special Issue: Selected Papers of the 3rd German-Polish Symposium on Data Analysis and Applications

    The first volume of Archives of Data Science, Series A is a special issue containing a selection of contributions originally presented at the 3rd Bilateral German-Polish Symposium on Data Analysis and Its Applications (GPSDAA 2013). All selected papers fit into the emerging field of data science, which combines the mathematical sciences (computer science, mathematics, operations research, and statistics) with an application domain (e.g. marketing, biology, economics, engineering).

    Threshold Choice Methods: the Missing Link

    Many performance metrics have been introduced for the evaluation of classification performance, with different origins and niches of application: accuracy, macro-accuracy, area under the ROC curve, the ROC convex hull, the absolute error, and the Brier score (with its decomposition into refinement and calibration). One way of understanding the relations among some of these metrics is the use of variable operating conditions (either in the form of misclassification costs or class proportions). Thus, a metric may correspond to some expected loss over a range of operating conditions. One dimension for the analysis has been precisely the distribution we take for this range of operating conditions, leading to some important connections in the area of proper scoring rules. However, we show that there is another dimension which has not received attention in the analysis of performance metrics. This new dimension is given by the decision rule, which is typically implemented as a threshold choice method when using scoring models. In this paper, we explore many old and new threshold choice methods: fixed, score-uniform, score-driven, rate-driven and optimal, among others. By calculating the loss of these methods for a uniform range of operating conditions we get the 0-1 loss, the absolute error, the Brier score (mean squared error), the AUC and the refinement loss, respectively. This provides a comprehensive view of performance metrics as well as a systematic approach to loss minimisation, namely: take a model, apply several threshold choice methods consistent with the information which is (and will be) available about the operating condition, and compare their expected losses. In order to assist in this procedure we also derive several connections between the aforementioned performance metrics, and we highlight the role of calibration in choosing the threshold choice method.
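
    A small numerical sketch may make the idea of threshold choice methods concrete. The cost-weighted loss below follows one common convention (an assumption here, not necessarily the paper's exact definition), and the three methods shown correspond to the fixed, score-driven and rate-driven choices mentioned in the abstract; averaging each method's loss over a uniform grid of cost proportions gives that method's expected loss.

        # Illustrative threshold choice methods and expected loss over a
        # uniform range of cost proportions c; conventions are assumptions.
        import numpy as np

        def loss_at(scores, labels, threshold, c):
            """Cost-weighted loss at cost proportion c for a given threshold."""
            pos, neg = labels == 1, labels == 0
            fnr = np.mean(scores[pos] <= threshold)   # positives predicted negative
            fpr = np.mean(scores[neg] > threshold)    # negatives predicted positive
            return 2 * (c * pos.mean() * fnr + (1 - c) * neg.mean() * fpr)

        def expected_loss(scores, labels, choose_threshold):
            grid = np.linspace(0, 1, 101)             # uniform operating conditions
            return np.mean([loss_at(scores, labels, choose_threshold(scores, c), c)
                            for c in grid])

        fixed        = lambda s, c: 0.5                      # one threshold for all conditions
        score_driven = lambda s, c: c                        # threshold equals the cost proportion
        rate_driven  = lambda s, c: np.quantile(s, 1 - c)    # predict positive for top c fraction

        scores = np.array([0.9, 0.8, 0.7, 0.4, 0.35, 0.1])
        labels = np.array([1, 1, 0, 1, 0, 0])
        for name, method in [("fixed", fixed), ("score-driven", score_driven),
                             ("rate-driven", rate_driven)]:
            print(name, round(expected_loss(scores, labels, method), 3))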

    Online learning in financial time series

    We wish to understand whether additional forms of learning can be combined with sequential optimisation to provide superior benefit over batch learning in various tasks operating on financial time series. In chapter 4, Online learning with radial basis function networks, we provide multi-horizon forecasts on the returns of financial time series. Our sequentially optimised radial basis function network (RBFNet) outperforms a random-walk baseline and several powerful supervised learners. Our RBFNets naturally measure the similarity between test samples and prototypes that capture the characteristics of the feature space. In chapter 5, Reinforcement learning for systematic FX trading, we perform feature representation transfer from an RBFNet to a direct, recurrent reinforcement learning (DRL) agent. Earlier academic work saw mixed results. We use better features and second-order optimisation methods, and adapt our model parameters sequentially. As a result, our DRL agents cope better with statistical changes to the data distribution, achieving higher risk-adjusted returns than a funding and a momentum baseline. In chapter 6, The recurrent reinforcement learning crypto agent, we construct a digital assets trading agent that performs feature space representation transfer from an echo state network to a DRL agent. The agent learns to trade the XBTUSD perpetual swap contract on BitMEX. Our meta-model can process data as a stream and learn sequentially; this helps it cope with the nonstationary environment. In chapter 7, Sequential asset ranking in nonstationary time series, we create an online learning long/short portfolio selection algorithm that can detect the best- and worst-performing portfolio constituents as they change over time; in particular, we successfully handle the higher transaction costs associated with using daily-sampled data, and achieve higher total and risk-adjusted returns than the long-only holding of the S&P 500 index with hindsight.
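
    To give a flavour of the sequentially optimised radial basis function network described in chapter 4, the sketch below (an illustration under assumed design choices, not the thesis's model) builds Gaussian similarity features against a set of prototypes and updates a linear read-out one sample at a time.

        # Illustrative online RBF network: prototype similarities plus a
        # linear read-out trained with per-sample gradient steps.
        # Prototypes, bandwidth and learning rate are assumed values.
        import numpy as np

        class OnlineRBFNet:
            def __init__(self, prototypes, bandwidth=1.0, lr=0.05):
                self.prototypes = np.asarray(prototypes)      # (K, d) centres
                self.bandwidth = bandwidth
                self.lr = lr
                self.w = np.zeros(len(self.prototypes) + 1)   # weights + bias

            def features(self, x):
                d2 = np.sum((self.prototypes - x) ** 2, axis=1)
                phi = np.exp(-d2 / (2 * self.bandwidth ** 2)) # similarity to each prototype
                return np.append(phi, 1.0)                    # bias term

            def predict(self, x):
                return self.features(x) @ self.w

            def update(self, x, target):
                """One sequential step: squared-error gradient on the newest sample."""
                phi = self.features(x)
                err = phi @ self.w - target
                self.w -= self.lr * err * phi
                return err

        # Toy usage: predict the next return from the previous two returns.
        rng = np.random.default_rng(1)
        returns = rng.normal(0, 0.01, size=200)
        model = OnlineRBFNet(rng.normal(0, 0.01, size=(8, 2)), bandwidth=0.02)
        for t in range(2, len(returns)):
            model.update(returns[t-2:t], returns[t])          # learn one sample at a time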

    Learning Lexicographic Preference Trees From Positive Examples

    This paper considers the task of learning the preferences of users over a combinatorial set of alternatives, as can be the case, for example, with online configurators. In many settings, what is available to the learner is a set of positive examples of alternatives that have been selected during past interactions. We propose to learn a model of the users' preferences that ranks previously chosen alternatives as high as possible. In this paper, we study the particular task of learning conditional lexicographic preferences. We present an algorithm to learn several classes of lexicographic preference trees, prove convergence properties of the algorithm, and experiment on both synthetic data and a real-world benchmark in the domain of recommendation in interactive configuration.
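
    The objective described above, ranking previously chosen alternatives as high as possible, can be illustrated with a small sketch. This is not the paper's algorithm: it assumes the preferred order of each attribute's values is known, ignores conditionality, and simply searches for the unconditional attribute-importance order whose induced lexicographic ranking places the positive examples highest on average.

        # Illustrative search for a lexicographic attribute order that ranks
        # past selections (positive examples) as high as possible.
        # Domains, value orders and example data are hypothetical.
        from itertools import product, permutations

        def lex_rank(alt, attr_order, domains):
            """Position of `alt` among all alternatives sorted lexicographically
            by attr_order (value index 0 being the most preferred)."""
            key = lambda a: tuple(domains[attr].index(a[attr]) for attr in attr_order)
            universe = [dict(zip(domains, values)) for values in product(*domains.values())]
            return sorted(universe, key=key).index({k: alt[k] for k in domains})

        def mean_rank(chosen, attr_order, domains):
            return sum(lex_rank(alt, attr_order, domains) for alt in chosen) / len(chosen)

        domains = {"colour": ["black", "white"], "size": ["small", "large"]}
        chosen = [{"colour": "black", "size": "small"},
                  {"colour": "black", "size": "large"}]
        best = min(permutations(domains), key=lambda order: mean_rank(chosen, order, domains))
        print(best)   # attribute order that best explains the past selections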