Short-term time series prediction using Hilbert space embeddings of autoregressive processes
Linear autoregressive models serve as basic representations of discrete-time stochastic processes. Various attempts have been made to provide non-linear versions of the basic autoregressive process, including several based on kernel methods. Motivated by the powerful framework of Hilbert space embeddings of distributions, in this paper we apply this methodology to the kernel embedding of an autoregressive process of order p. In doing so, we obtain a non-linear version of the autoregressive process that shows improved performance over the linear model on highly complex time series. We use the proposed method for one-step-ahead forecasting of several time series and compare its performance against other non-linear methods.
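As a rough illustration of the general idea of a kernelized non-linear AR(p) forecaster, the sketch below fits plain kernel ridge regression on lag vectors. This is an illustrative stand-in, not the paper's Hilbert space embedding construction; the kernel choice, bandwidth, and data are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise Gaussian (RBF) kernel between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_ar(series, p=3, gamma=1.0, lam=1e-3):
    # Build lag vectors x_t = (y_{t-p}, ..., y_{t-1}) and targets y_t,
    # then solve kernel ridge regression: (K + lam*I) alpha = y.
    X = np.array([series[t - p:t] for t in range(p, len(series))])
    y = series[p:]
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return X, alpha

def predict_next(series, X, alpha, p=3, gamma=1.0):
    # One-step-ahead forecast from the last p observations.
    x = np.asarray(series[-p:])[None, :]
    return float(rbf_kernel(x, X, gamma) @ alpha)

rng = np.random.default_rng(0)
t = np.arange(200)
series = np.sin(0.2 * t) + 0.05 * rng.standard_normal(200)  # toy data
X, alpha = fit_kernel_ar(series, p=3)
yhat = predict_next(series, X, alpha, p=3)  # forecast for t = 200
```

The non-linearity enters entirely through the kernel: the forecast is a weighted combination of training targets, with weights determined by similarity of lag vectors rather than by fixed linear coefficients.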
Time Series Cluster Kernel for Learning Similarities between Multivariate Time Series with Missing Data
Similarity-based approaches represent a promising direction for time series analysis. However, many such methods rely on parameter tuning, and some have shortcomings if the time series are multivariate (MTS), due to dependencies between attributes, or if the time series contain missing data. In this paper, we address these challenges within the powerful context of kernel methods by proposing the robust \emph{time series cluster kernel} (TCK). The approach leverages the missing-data handling properties of Gaussian mixture models (GMMs) augmented with informative prior distributions. An ensemble learning approach is exploited to ensure robustness to parameters by combining the clustering results of many GMMs to form the final kernel. We evaluate the TCK on synthetic and real data and compare it to other state-of-the-art techniques. The experimental results demonstrate that the TCK is robust to parameter choices, provides competitive results for MTS without missing data, and outstanding results in the presence of missing data. Comment: 23 pages, 6 figures.
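The ensemble idea behind this kind of kernel, averaging co-clustering indicators over many randomized runs, can be sketched in a toy form. The sketch below uses randomized k-means runs purely for illustration, in place of the paper's GMMs with informative priors; all data and parameter choices are assumptions.

```python
import numpy as np

def kmeans_labels(X, k, rng, iters=20):
    # Basic Lloyd k-means; returns a cluster label for each row of X.
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(lab == j):
                C[j] = X[lab == j].mean(0)
    return lab

def ensemble_kernel(X, n_runs=30, seed=0):
    # K[i, j] = fraction of randomized runs in which samples i and j
    # land in the same cluster (cluster count varied per run).
    rng = np.random.default_rng(seed)
    K = np.zeros((len(X), len(X)))
    for _ in range(n_runs):
        lab = kmeans_labels(X, k=int(rng.integers(2, 6)), rng=rng)
        K += (lab[:, None] == lab[None, :]).astype(float)
    return K / n_runs

# Two well-separated synthetic groups of 4-dimensional points.
X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (10, 4)),
               np.random.default_rng(2).normal(3, 0.3, (10, 4))])
K = ensemble_kernel(X)
```

Averaging over many runs with varying cluster counts is what gives the combined kernel its robustness: no single parameter setting dominates the final similarity.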
A Primer on Reproducing Kernel Hilbert Spaces
Reproducing kernel Hilbert spaces are elucidated without assuming prior familiarity with Hilbert spaces. Compared with extant pedagogic material, greater care is placed on motivating the definition of reproducing kernel Hilbert spaces and explaining when and why these spaces are efficacious. The novel viewpoint is that reproducing kernel Hilbert space theory studies extrinsic geometry, associating with each geometric configuration a canonical overdetermined coordinate system. This coordinate system varies continuously with changing geometric configurations, making it well-suited for studying problems whose solutions also vary continuously with changing geometry. This primer can also serve as an introduction to infinite-dimensional linear algebra because reproducing kernel Hilbert spaces have more properties in common with Euclidean spaces than do more general Hilbert spaces. Comment: Revised version submitted to Foundations and Trends in Signal Processing.
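The reproducing property at the heart of such a primer can be checked numerically for functions in the span of kernel sections. The sketch below uses a Gaussian kernel as an arbitrary example (not one prescribed by the primer); points and coefficients are made up.

```python
import numpy as np

# For f = sum_i a_i k(x_i, .) and g = sum_j b_j k(y_j, .) in the RKHS H,
# the inner product reduces to Gram-matrix algebra, <f, g>_H = a^T K(X, Y) b,
# and evaluation is reproduced by f(x) = <f, k(., x)>_H.

def k(x, y, gamma=0.5):
    # Gaussian (RBF) kernel on the real line.
    return np.exp(-gamma * (x - y) ** 2)

rng = np.random.default_rng(0)
X, a = rng.uniform(-1, 1, 5), rng.standard_normal(5)  # f = sum_i a_i k(x_i, .)
Y, b = rng.uniform(-1, 1, 4), rng.standard_normal(4)  # g = sum_j b_j k(y_j, .)

# Inner product and squared norms via Gram matrices.
fg = a @ k(X[:, None], Y[None, :]) @ b
f2 = a @ k(X[:, None], X[None, :]) @ a  # ||f||^2 >= 0 since the Gram is PSD
g2 = b @ k(Y[:, None], Y[None, :]) @ b  # ||g||^2 >= 0

# Reproducing property at a point x0: f(x0) = <f, k(., x0)>_H.
x0 = 0.3
f_eval = a @ k(X, x0)                                    # direct evaluation
f_repr = (a @ k(X[:, None], np.array([x0])[None, :]))[0]  # inner-product form
```

The two evaluations agree by construction, and the Cauchy-Schwarz inequality |⟨f, g⟩| ≤ ‖f‖·‖g‖ holds in the computed quantities, which is exactly the finite-dimensional, Euclidean-like behavior the primer emphasizes.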
A kernel-based embedding framework for high-dimensional data analysis
The world is essentially multidimensional, e.g., neurons, computer networks, Internet traffic, and financial markets. The challenge is to discover and extract the information that lies hidden in these high-dimensional datasets to support classification, regression, clustering, and visualization tasks. Accordingly, dimensionality reduction aims to provide a faithful representation of data in a low-dimensional space. This removes noise and redundant features, which is useful for understanding and visualizing the structure of complex datasets. The focus of this work is the analysis of high-dimensional data to support regression tasks and exploratory data analysis in real-world scenarios. First, we propose an online framework to predict the long-term future behavior of time series. Second, we propose a new dimensionality reduction method that preserves the significant structure of high-dimensional data in a low-dimensional space. Lastly, we propose a sparsification strategy based on dimensionality reduction to avoid overfitting and reduce computational complexity in online applications.
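As a minimal, generic illustration of dimensionality reduction recovering low-dimensional structure, the sketch below applies plain PCA via the SVD. This is a linear baseline for illustration only, not the thesis's kernel-based methods; the synthetic data are an assumption.

```python
import numpy as np

def pca(X, d=2):
    # Center the data and project onto the top-d principal directions,
    # obtained from the SVD of the centered data matrix.
    Xc = X - X.mean(0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T

rng = np.random.default_rng(0)
# 100 points lying near a 2-D plane embedded in a 10-D space.
Z = rng.standard_normal((100, 2))
A = rng.standard_normal((2, 10))
X = Z @ A + 0.01 * rng.standard_normal((100, 10))
Y = pca(X, d=2)  # low-dimensional representation
```

Because the data concentrate near a 2-D subspace, the two leading principal directions capture almost all of the variance; the remaining eight coordinates carry mostly the small injected noise.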
Memory-Based Reduced Modelling and Data-Based Estimation of Opinion Spreading
We investigate opinion dynamics based on an agent-based model and are interested in predicting the evolution of the percentages of the entire agent population that share an opinion. Since these opinion percentages can be seen as an aggregated observation of the full system state (the individual opinions of each agent), we view this in the framework of the Mori-Zwanzig projection formalism. More specifically, we show how to estimate a nonlinear autoregressive model (NAR) with memory from data given by a time series of opinion percentages, and discuss its predictive capabilities for various specific topologies of the agent interaction network. We demonstrate that the inclusion of memory terms significantly improves the prediction quality on examples with different network topologies.
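A toy version of estimating an NAR model with memory from a scalar time series can be sketched with lagged polynomial features and least squares. This is an illustrative simplification, not the paper's Mori-Zwanzig-based construction; the synthetic "opinion fraction" trajectory and memory depth are assumptions.

```python
import numpy as np

def fit_nar(y, p=3):
    # NAR with memory depth p: regress y_t on the p previous values and
    # their squares (a simple polynomial nonlinearity, chosen here only
    # for illustration), via linear least squares.
    X = np.array([y[t - p:t] for t in range(p, len(y))])
    Phi = np.hstack([X, X ** 2, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Phi, y[p:], rcond=None)
    return w

def step(y_hist, w, p=3):
    # One-step-ahead prediction from the last p observed values.
    x = np.asarray(y_hist[-p:])
    phi = np.concatenate([x, x ** 2, [1.0]])
    return float(phi @ w)

# Synthetic trajectory: an opinion fraction relaxing toward 0.5 (made up).
t = np.arange(300)
y = 0.5 + 0.4 * np.exp(-0.01 * t) * np.cos(0.1 * t)
w = fit_nar(y, p=3)
pred = step(y[:150], w, p=3)  # one-step-ahead prediction of y[150]
```

The memory terms correspond to the lagged inputs: with p = 1 the model sees only the current percentage, while p > 1 lets the fitted map exploit the history that the Mori-Zwanzig formalism says an aggregated observable generally requires.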