
    Graphical Models: Modeling, Optimization, and Hilbert Space Embedding

    Over the past two decades, graphical models have been widely used as a powerful tool for compactly representing distributions. Kernel methods, on the other hand, have been used extensively to obtain rich feature representations. This thesis aims to combine graphical models with kernels to produce compact models with rich representational abilities. We focus on the following four areas.

    1. Conditional random fields for multi-agent reinforcement learning. Conditional random fields (CRFs) are graphical models for the probability of labels given observations. They have traditionally assumed that, conditioned on the training data, the label sequences of different training examples are independent and identically distributed (iid). We extend the use of CRFs to a class of temporal learning algorithms, namely policy-gradient reinforcement learning (RL). The labels are then no longer iid: they are actions that update the environment and affect the next observation. From an RL point of view, CRFs provide a natural way to model joint actions in a decentralized Markov decision process. Using tree sampling for inference, our experiments show that RL methods employing CRFs clearly outperform those that do not model the joint policy.

    2. Bayesian online multi-label classification. Gaussian density filtering provides fast and effective inference for graphical models (Maybeck, 1982). Based on it, we propose a Bayesian online multi-label classification (BOMC) framework which learns a probabilistic model of the linear classifier. The training labels are incorporated to update the posterior of the classifiers via a graphical model similar to TrueSkill (Herbrich et al., 2007). Using samples from the posterior, we label the test data by maximizing the expected F1-score. In our experiments, BOMC delivers significantly higher macro-averaged F1-scores than state-of-the-art online maximum-margin learners.

    3. Hilbert space embedding of distributions. Graphical models are also an essential tool in kernel measures of independence for non-iid data. Traditional information-theoretic measures often require density estimation, which makes them poorly suited to statistical estimation. Motivated by the fact that distributions often enter machine learning via expectations, we characterize the distance between distributions in terms of distances between means, in particular means in reproducing kernel Hilbert spaces, known as kernel embeddings. Under this framework, undirected graphical models further allow us to factorize the kernel embeddings over cliques, which yields efficient measures of independence for non-iid data (Zhang et al., 2009).

    4. Optimization in maximum-margin models for structured data. Maximum-margin estimation for structured data is an important task in which graphical models also play a key role. Such problems are special cases of regularized risk minimization, for which bundle methods (BMRM; Teo et al., 2007) are a state-of-the-art general-purpose solver. Smola et al. (2007) proved that BMRM requires O(1/epsilon) iterations to converge to an epsilon-accurate solution, and we further show that this rate matches the lower bound. Motivated by Nesterov (2003, 2005), we exploit the composite structure of the objective function and devise an algorithm for the structured loss that converges to an epsilon-accurate solution in O(1/sqrt(epsilon)) iterations.
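    The third focus area above rests on comparing distributions through their mean embeddings in a reproducing kernel Hilbert space. A minimal sketch of that idea, assuming a Gaussian RBF kernel and plain NumPy (illustrative only, not the thesis code), is the biased empirical estimate of the squared maximum mean discrepancy (MMD) between two samples:

        import numpy as np

        def rbf_kernel(X, Y, sigma=1.0):
            # Gaussian RBF kernel matrix between the rows of X and the rows of Y.
            sq = (np.sum(X**2, axis=1)[:, None]
                  + np.sum(Y**2, axis=1)[None, :]
                  - 2.0 * X @ Y.T)
            return np.exp(-sq / (2.0 * sigma**2))

        def mmd2_biased(X, Y, sigma=1.0):
            # Biased estimate of ||mean embedding of P - mean embedding of Q||^2 in the RKHS.
            return (rbf_kernel(X, X, sigma).mean()
                    + rbf_kernel(Y, Y, sigma).mean()
                    - 2.0 * rbf_kernel(X, Y, sigma).mean())

        rng = np.random.default_rng(0)
        X = rng.normal(0.0, 1.0, size=(200, 2))   # sample from P
        Y = rng.normal(0.5, 1.0, size=(200, 2))   # sample from Q (shifted mean)
        print(mmd2_biased(X, Y))                  # larger when P and Q differ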

    Large-Scale Kernel Methods for Independence Testing

    Representations of probability measures in reproducing kernel Hilbert spaces provide a flexible framework for fully nonparametric hypothesis tests of independence, which can capture any type of departure from independence, including nonlinear associations and multivariate interactions. However, these approaches come with an at least quadratic computational cost in the number of observations, which can be prohibitive in many applications. Arguably, it is exactly in such large-scale datasets that capturing any type of dependence is of interest, so striking a favourable tradeoff between computational efficiency and test performance for kernel independence tests would have a direct impact on their applicability in practice. In this contribution, we provide an extensive study of the use of large-scale kernel approximations in the context of independence testing, contrasting block-based, Nyström and random Fourier feature approaches. Through a variety of synthetic data experiments, it is demonstrated that our novel large-scale methods give performance comparable to existing methods while using significantly less computation time and memory. (Comment: 29 pages, 6 figures)
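    The random Fourier feature approach contrasted above can be sketched as follows: approximate the RBF kernel's feature map with D random cosine features, so that an HSIC-style cross-covariance statistic costs on the order of n*D^2 rather than n^2 operations. This is a hedged illustration in NumPy, not the authors' implementation; the statistic here is simply the squared Frobenius norm of the empirical cross-covariance of the two feature maps, which is close to zero under independence.

        import numpy as np

        def rff(X, D, sigma=1.0, seed=0):
            # D random Fourier features approximating a Gaussian RBF kernel on X.
            rng = np.random.default_rng(seed)
            W = rng.normal(0.0, 1.0 / sigma, size=(X.shape[1], D))
            b = rng.uniform(0.0, 2.0 * np.pi, size=D)
            return np.sqrt(2.0 / D) * np.cos(X @ W + b)

        def rff_independence_stat(X, Y, D=100, sigma=1.0):
            # Squared Frobenius norm of the cross-covariance of the feature maps of X and Y.
            phi = rff(X, D, sigma, seed=1)
            psi = rff(Y, D, sigma, seed=2)
            phi -= phi.mean(axis=0)
            psi -= psi.mean(axis=0)
            C = phi.T @ psi / X.shape[0]          # D x D empirical cross-covariance
            return np.sum(C**2)                   # near zero under independence

        rng = np.random.default_rng(3)
        X = rng.normal(size=(5000, 1))
        Y = X**2 + 0.1 * rng.normal(size=(5000, 1))   # nonlinear dependence
        print(rff_independence_stat(X, Y))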

    Shortcomings of a parametric VaR approach and nonparametric improvements based on a non-stationary return series model

    A non-stationary regression model for financial returns is examined theoretically in this paper. Volatility dynamics are modelled exogenously and deterministically, captured by a nonparametric curve estimate on equidistant centered returns. We prove consistency and asymptotic normality of a symmetric variance estimator and of a one-sided variance estimator analytically, and derive remarks on the bandwidth choice. Further attention is paid to the asymmetry and heavy tails of the return distribution, implemented via an asymmetric version of the Pearson type VII distribution for the random innovations. By providing a method of moments for its parameter estimation and a connection to the Student-t distribution, we offer the framework for a factor-based VaR approach. The approximation quality of the non-stationary model is supported by simulation studies.
    Keywords: heteroscedastic asset returns, non-stationarity, nonparametric regression, volatility, innovation modelling, asymmetric heavy tails, distributional forecast, Value at Risk (VaR)
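    A hedged sketch of how a one-sided nonparametric variance estimate can feed a VaR forecast, assuming a simple one-sided Gaussian kernel smoother and, as a stand-in for the paper's asymmetric Pearson type VII law, a standardized Student-t innovation quantile:

        import numpy as np
        from scipy import stats

        def one_sided_vol(returns, t, bandwidth):
            # Kernel-weighted volatility estimate using only returns observed up to time t.
            past = returns[: t + 1] - returns[: t + 1].mean()
            lags = np.arange(t, -1, -1)                    # 0 = most recent observation
            w = np.exp(-0.5 * (lags / bandwidth) ** 2)     # one-sided Gaussian weights
            w /= w.sum()
            return np.sqrt(np.sum(w * past**2))

        def var_forecast(returns, t, bandwidth, alpha=0.01, df=5):
            # One-step-ahead Value-at-Risk at level alpha, reported as a positive loss.
            sigma = one_sided_vol(returns, t, bandwidth)
            q = stats.t.ppf(alpha, df) * np.sqrt((df - 2) / df)   # unit-variance t quantile
            return -q * sigma

        rng = np.random.default_rng(0)
        r = 0.01 * rng.standard_t(df=5, size=1000)
        print(var_forecast(r, t=999, bandwidth=50.0))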