3 research outputs found

    Geometric Fusion via Joint Delay Embeddings

    We introduce geometric and topological methods to develop a new framework for fusing multi-sensor time series. This framework consists of two steps: (1) a joint delay embedding, which reconstructs a high-dimensional state space in which our sensors correspond to observation functions, and (2) a simple orthogonalization scheme, which accounts for tangencies between such observation functions and produces a more diversified geometry on the embedding space. We conclude with synthetic and real-world experiments demonstrating that our framework outperforms traditional metric fusion methods.
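    A minimal, hypothetical Python sketch of the two-step pipeline described above, assuming a standard Takens-style sliding-window delay embedding per sensor and using a plain QR orthogonalization of the embedded coordinates as a stand-in for the paper's tangency-aware scheme; the sensor signals, window length, and delay are illustrative choices, not the authors' settings.

    import numpy as np

    def delay_embed(x, dim=3, tau=1):
        # Takens-style delay embedding of a 1-D series x into R^dim.
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    def joint_delay_embed(sensors, dim=3, tau=1):
        # Concatenate per-sensor delay embeddings into one joint state space.
        return np.hstack([delay_embed(s, dim, tau) for s in sensors])

    def orthogonalize(E):
        # Stand-in orthogonalization: orthonormalize the centered coordinates.
        Q, _ = np.linalg.qr(E - E.mean(axis=0))
        return Q

    # Two noisy observation functions of the same latent oscillator.
    t = np.linspace(0, 20 * np.pi, 2000)
    sensors = [np.sin(t) + 0.05 * np.random.randn(t.size),
               np.cos(2 * t) + 0.05 * np.random.randn(t.size)]
    E = joint_delay_embed(sensors, dim=5, tau=4)   # joint state-space point cloud
    E_orth = orthogonalize(E)                      # diversified coordinates
    print(E.shape, E_orth.shape)                   # (1984, 10) (1984, 10)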

    Graph Spectral Embedding for Parsimonious Transmission of Multivariate Time Series

    We propose a graph spectral representation of time series data that 1) is parsimoniously encoded to user-demanded resolution; 2) is unsupervised and performant in data-constrained scenarios; 3) captures event and event-transition structure within the time series; and 4) has near-linear computational complexity in both signal length and ambient dimension. This representation, which we call Laplacian Events Signal Segmentation (LESS), can be computed on time series of arbitrary dimension and originating from sensors of arbitrary type. Hence, time series originating from sensors of heterogeneous type can be compressed to levels demanded by constrained-communication environments before being fused at a common center. The temporal dynamics of the data are summarized without explicit partitioning or probabilistic modeling. As a proof of principle, we apply this technique to high-dimensional wavelet coefficients computed from the Free Spoken Digit Dataset to generate a memory-efficient representation that is interpretable. Due to its unsupervised and non-parametric nature, LESS representations remain performant in the digit classification task despite the absence of labels and limited data.
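    For intuition only, a small Python sketch of a generic graph-Laplacian spectral embedding of a multivariate time series: it builds a dense Gaussian affinity graph over time samples and keeps the leading Laplacian eigenvectors. This is an assumption of ours, not the LESS algorithm itself (LESS is described as near-linear, so it cannot rely on a dense affinity matrix); the signal, kernel width, and embedding size are illustrative.

    import numpy as np

    def spectral_embedding(X, k=8, sigma=1.0):
        # X: (T, d) multivariate series; returns (T, k) spectral coordinates.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        W = np.exp(-d2 / (2 * sigma ** 2))                    # Gaussian affinities
        L = np.diag(W.sum(axis=1)) - W                        # unnormalized graph Laplacian
        vals, vecs = np.linalg.eigh(L)
        return vecs[:, 1 : k + 1]                             # skip the constant eigenvector

    # A signal with an abrupt regime change: the change shows up as a jump
    # in the low-frequency spectral coordinates.
    T = 400
    X = np.concatenate([np.random.randn(T // 2, 3),
                        3 + np.random.randn(T // 2, 3)])
    Z = spectral_embedding(X, k=4)
    print(Z.shape)  # (400, 4)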

    High-Dimensional Data Fusion via Joint Manifold Learning

    The emergence of low-cost sensing architectures for diverse modalities has made it possible to deploy sensor networks that acquire large amounts of very high-dimensional data. To cope with such a data deluge, manifold models are often developed that provide a powerful theoretical and algorithmic framework for capturing the intrinsic structure of data governed by a low-dimensional set of parameters. However, these models do not typically take into account dependencies among multiple sensors. We thus propose a new joint manifold framework for data ensembles that exploits such dependencies. We show that joint manifold structure can lead to improved performance for manifold learning. Additionally, we leverage recent results concerning random projections of manifolds to formulate a universal, network-scalable dimensionality reduction scheme that efficiently fuses the data from all sensors.
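    A hypothetical Python sketch of the fusion idea via random projections: each sensor compresses its high-dimensional observations with a local Gaussian random projection, and the concatenated measurements act as a random projection of the joint (stacked) manifold seen at the fusion center. The latent parameter, sensor dimensions, and number of measurements below are illustrative assumptions, not the paper's construction.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_projector(ambient_dim, m, rng):
        # Random Gaussian projection R^ambient_dim -> R^m.
        return rng.standard_normal((m, ambient_dim)) / np.sqrt(m)

    # Three sensors observing the same 1-D latent parameter through
    # different high-dimensional modalities (dimensions are arbitrary).
    theta = rng.uniform(0, 2 * np.pi, size=200)
    sensors = [np.outer(np.sin(theta), rng.standard_normal(500)),
               np.outer(np.cos(theta), rng.standard_normal(800)),
               np.outer(np.sin(2 * theta), rng.standard_normal(300))]

    m = 20  # measurements per sensor, chosen for illustration
    projectors = [make_projector(S.shape[1], m, rng) for S in sensors]

    # Each sensor projects locally; the fusion center only receives the
    # concatenated low-dimensional measurements of the joint manifold.
    fused = np.hstack([S @ P.T for S, P in zip(sensors, projectors)])
    print(fused.shape)  # (200, 60)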