    SOM-VAE: Interpretable Discrete Representation Learning on Time Series

    High-dimensional time series are common in many domains. Since human cognition is not optimized to work well in high-dimensional spaces, these areas could benefit from interpretable low-dimensional representations. However, most representation learning algorithms for time series data are difficult to interpret. This is due to non-intuitive mappings from data features to salient properties of the representation and non-smoothness over time. To address this problem, we propose a new representation learning framework building on ideas from interpretable discrete dimensionality reduction and deep generative modeling. This framework allows us to learn discrete representations of time series, which give rise to smooth and interpretable embeddings with superior clustering performance. We introduce a new way to overcome the non-differentiability in discrete representation learning and present a gradient-based version of the traditional self-organizing map algorithm that is more performant than the original. Furthermore, to allow for a probabilistic interpretation of our method, we integrate a Markov model in the representation space. This model uncovers the temporal transition structure, improves clustering performance even further, and provides additional explanatory insights as well as a natural representation of uncertainty. We evaluate our model in terms of clustering performance and interpretability on static (Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST images, a chaotic Lorenz attractor system with two macro states, as well as on a challenging real-world medical time series application on the eICU data set. Our learned representations compare favorably with competitor methods and facilitate downstream tasks on the real-world data. Comment: Accepted for publication at the Seventh International Conference on Learning Representations (ICLR 2019).
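    The gradient-based self-organizing map mentioned in the abstract can be illustrated with a minimal sketch: instead of the classic SOM's hand-tuned update schedule, each step takes a gradient of a squared-error loss that pulls the best-matching codebook vector and its grid neighbors toward the input. All names, the grid size, and the hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 4, 4, 3            # 4x4 map of 3-dim codebook vectors
codebook = rng.normal(size=(grid_h * grid_w, dim))

def grid_neighbors(k):
    """Indices of the 2D grid neighbors of node k (up/down/left/right)."""
    r, c = divmod(k, grid_w)
    nbrs = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < grid_h and 0 <= cc < grid_w:
            nbrs.append(rr * grid_w + cc)
    return nbrs

def sgd_step(x, codebook, lr=0.1, neighbor_weight=0.5):
    """One gradient step on a squared-error loss: pull the best-matching
    unit and its grid neighbors toward the input x."""
    k = int(np.argmin(((codebook - x) ** 2).sum(axis=1)))  # best-matching unit
    codebook[k] += lr * (x - codebook[k])
    for j in grid_neighbors(k):
        codebook[j] += lr * neighbor_weight * (x - codebook[j])
    return k

x = rng.normal(size=dim)
bmu = sgd_step(x, codebook)
```

    Because the update is a plain gradient step, it composes with the rest of a differentiable model (here, the VAE encoder/decoder) and standard optimizers.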

    Robust Kalman Filtering: Asymptotic Analysis of the Least Favorable Model

    We consider a robust filtering problem where the robust filter is designed according to the least favorable model belonging to a ball about the nominal model. In this approach, the ball radius specifies the modeling error tolerance, and the least favorable model is computed by performing a Riccati-like backward recursion. We show that this recursion converges provided that the tolerance is sufficiently small.
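    For context, the nominal object being perturbed here is the classical Kalman filter Riccati recursion for the error covariance, which converges to a fixed point for a detectable and stabilizable model; the paper studies the analogous convergence for the modified, Riccati-like backward recursion of the least favorable model. The sketch below shows only the standard nominal recursion, with an assumed toy model.

```python
import numpy as np

# Assumed toy model: a near-integrator observed through its first state.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition
C = np.array([[1.0, 0.0]])               # observation matrix
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.1]])                    # measurement noise covariance

P = np.eye(2)
for _ in range(200):
    # predict, then measurement update of the error covariance
    Pp = A @ P @ A.T + Q
    S = C @ Pp @ C.T + R                 # innovation covariance
    K = Pp @ C.T @ np.linalg.inv(S)      # Kalman gain
    P = Pp - K @ C @ Pp

# P has (numerically) reached the stabilizing fixed point of the
# discrete-time algebraic Riccati equation.
```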

    Projective system approach to the martingale characterization of the absence of arbitrage

    The equivalence between the absence of arbitrage and the existence of an equivalent martingale measure fails when an infinite number of trading dates is considered. By enlarging the set of states of nature and the probability measure through a projective system of topological spaces and Radon measures, we characterize the absence of arbitrage when the time set is countable.

    An Efficient Search Strategy for Aggregation and Discretization of Attributes of Bayesian Networks Using Minimum Description Length

    Bayesian networks are convenient graphical expressions for high-dimensional probability distributions representing complex relationships between a large number of random variables. They have been employed extensively in areas such as bioinformatics, artificial intelligence, diagnosis, and risk management. The recovery of the structure of a network from data is of prime importance for the purposes of modeling, analysis, and prediction. Most recovery algorithms in the literature assume either discrete or continuous but Gaussian data. For general continuous data, discretization is usually employed but often destroys the very structure one is out to recover. Friedman and Goldszmidt suggest an approach based on the minimum description length principle that chooses a discretization which preserves the information in the original data set; however, it is difficult, if not impossible, to implement for even moderately sized networks. In this paper we provide an extremely efficient search strategy which allows one to use the Friedman and Goldszmidt discretization in practice.
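    The two-part minimum-description-length idea can be sketched for the simplest case: discretizing one continuous attribute into k equal-width bins when it is used to predict a binary variable. The score trades off the code length of the predicted variable given the bins against a parameter cost that grows with k. This is a simplified stand-in for illustration only; the Friedman and Goldszmidt score additionally accounts for the variable's place in the network. The function name and the bimodal toy data are assumptions.

```python
import numpy as np

def mdl_score(x, y, k):
    """Two-part MDL-style score: code length of binary y given the k
    equal-width bins of x, plus half a log n of parameter cost per bin."""
    n = len(x)
    if k == 1:
        bins = np.zeros(n, dtype=int)
    else:
        edges = np.linspace(x.min(), x.max(), k + 1)
        bins = np.clip(np.digitize(x, edges[1:-1]), 0, k - 1)
    data_bits = 0.0
    for b in range(k):
        yb = y[bins == b]
        p = yb.mean() if len(yb) else 0.0
        if 0.0 < p < 1.0:                # pure or empty bins cost no data bits
            data_bits -= len(yb) * (p * np.log2(p) + (1 - p) * np.log2(1 - p))
    model_bits = 0.5 * k * np.log2(n)    # parameter cost
    return data_bits + model_bits

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.3, 500), rng.normal(2, 0.3, 500)])
y = (x > 0).astype(int)
scores = {k: mdl_score(x, y, k) for k in (1, 2, 4, 8)}
```

    On this toy data, two bins capture all the information about y, so the score is minimized at small k; the search-strategy contribution of the paper is about finding such minimizers efficiently across a whole network.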

    Option Pricing with Delayed Information

    We propose a model to study the effects of delayed information on option pricing. We first discuss the absence of arbitrage in our model, and then super-replication with delayed information in a binomial model; notably, we present a closed-form formula for the price of convex contingent claims. We also address the convergence problem as the time step and delay length tend to zero, and present analogous results in the continuous-time framework. Finally, we explore how delayed information exaggerates the volatility smile.
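    As a baseline for the binomial setting, the full-information (zero-delay) price of a convex claim such as a European call is computed by standard risk-neutral backward induction; the paper's contribution concerns what changes when the hedger observes the stock only with a delay. The function name and parameters below are illustrative assumptions, not the paper's formula.

```python
import numpy as np

def binomial_call(S0, K, r, u, d, steps):
    """Full-information binomial (CRR-style) price of a European call
    by risk-neutral backward induction; requires d < 1 + r < u."""
    q = (1 + r - d) / (u - d)                # risk-neutral up probability
    j = np.arange(steps + 1)                 # number of up moves
    S = S0 * u**j * d**(steps - j)           # terminal stock prices
    V = np.maximum(S - K, 0.0)               # call payoff
    for _ in range(steps):
        # discounted risk-neutral expectation one step back
        V = (q * V[1:] + (1 - q) * V[:-1]) / (1 + r)
    return V[0]

price = binomial_call(S0=100.0, K=100.0, r=0.01, u=1.1, d=0.9, steps=50)
```

    Super-replication under delay can only make the seller's price higher than this full-information value, which is what drives the exaggerated volatility smile the abstract mentions.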