
    A Unified Approach to Attractor Reconstruction

    In the analysis of complex, nonlinear time series, scientists in a variety of disciplines have relied on a time-delayed embedding of their data, i.e. attractor reconstruction. The process has focused primarily on heuristic and empirical arguments for selection of the key embedding parameters, delay and embedding dimension. This approach has left several long-standing but common problems unresolved, in which the standard approaches produce inferior results or give no guidance at all. We view the current reconstruction process as unnecessarily broken into separate problems. We propose an alternative approach that views the choice of all embedding parameters as one and the same problem, addressable with a single statistical test formulated directly from the reconstruction theorems. This allows for varying time delays appropriate to the data and simultaneously helps decide on embedding dimension. A second new statistic, undersampling, acts as a check against overly long time delays and overly large embedding dimension. Our approach is more flexible than those currently used, yet is more directly connected with the mathematical requirements of embedding. In addition, the statistics developed guide the user by allowing optimization and by warning when embedding parameters are chosen beyond what the data can support. We demonstrate our approach on uni- and multivariate data, data possessing multiple time scales, and chaotic data. This unified approach resolves all the main issues in attractor reconstruction. Comment: 22 pages, revised version as submitted to CHAOS. Manuscript is currently under review. 4 figures, 31 references.
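
    The abstract builds on the standard delay-coordinate construction, in which a scalar series is mapped to vectors of lagged samples. The sketch below shows only that classical construction; it does not reproduce the paper's statistical test or its undersampling statistic, and the function name and example parameters are illustrative assumptions.

    import numpy as np

    def delay_embed(x, m, tau):
        # Map a scalar series x to m-dimensional delay vectors
        # (x[t], x[t + tau], ..., x[t + (m - 1) * tau]).
        x = np.asarray(x, dtype=float)
        n = len(x) - (m - 1) * tau  # number of reconstructed state vectors
        if n <= 0:
            raise ValueError("series too short for the requested m and tau")
        return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

    # usage: reconstruct a noisy sine wave with embedding dimension 3 and delay 5
    t = np.linspace(0, 20 * np.pi, 2000)
    points = delay_embed(np.sin(t) + 0.01 * np.random.randn(t.size), m=3, tau=5)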

    Few-Shot Single-View 3-D Object Reconstruction with Compositional Priors

    The impressive performance of deep convolutional neural networks in single-view 3D reconstruction suggests that these models perform non-trivial reasoning about the 3D structure of the output space. However, recent work has challenged this belief, showing that complex encoder-decoder architectures perform similarly to nearest-neighbor baselines or simple linear decoder models that exploit large amounts of per-category data in standard benchmarks. On the other hand, settings where 3D shape must be inferred for new categories with few examples are more natural and require models that generalize about shapes. In this work we demonstrate experimentally that naive baselines do not apply when the goal is to learn to reconstruct novel objects from very few examples, and that in a \emph{few-shot} learning setting the network must learn concepts that can be applied to new categories, avoiding rote memorization. To address deficiencies in existing approaches to this problem, we propose three approaches that efficiently integrate a class prior into a 3D reconstruction model, allowing it to account for intra-class variability and imposing an implicit compositional structure that the model should learn. Experiments on the popular ShapeNet database demonstrate that our method significantly outperforms existing baselines on this task in the few-shot setting.
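
    The abstract argues for integrating a class prior into the reconstruction model rather than decoding shape from the image alone. The fragment below is a hedged sketch of one simple way to do that, conditioning a voxel decoder on a learned per-category embedding; the layer sizes, the concatenation strategy, and all names are assumptions, and the paper's compositional prior is not reproduced.

    import torch
    import torch.nn as nn

    class PriorConditionedDecoder(nn.Module):
        def __init__(self, num_classes=100, image_feat_dim=256, prior_dim=64, voxel_res=32):
            super().__init__()
            # one learned prior vector per category (an assumed stand-in for the class prior)
            self.prior = nn.Embedding(num_classes, prior_dim)
            self.decoder = nn.Sequential(
                nn.Linear(image_feat_dim + prior_dim, 1024),
                nn.ReLU(),
                nn.Linear(1024, voxel_res ** 3),  # occupancy logits for a voxel grid
            )
            self.voxel_res = voxel_res

        def forward(self, image_features, class_ids):
            # condition the shape decoder on both the image features and the class prior
            z = torch.cat([image_features, self.prior(class_ids)], dim=-1)
            logits = self.decoder(z)
            return logits.view(-1, self.voxel_res, self.voxel_res, self.voxel_res)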

    Learning to Embed Words in Context for Syntactic Tasks

    We present models for embedding words in the context of surrounding words. Such models, which we refer to as token embeddings, represent the characteristics of a word that are specific to a given context, such as word sense, syntactic category, and semantic role. We explore simple, efficient token embedding models based on standard neural network architectures. We learn token embeddings on a large amount of unannotated text and evaluate them as features for part-of-speech taggers and dependency parsers trained on much smaller amounts of annotated data. We find that predictors endowed with token embeddings consistently outperform baseline predictors across a range of context window and training set sizes. Comment: Accepted by the ACL 2017 Repl4NLP workshop.
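
    The token embeddings described in the abstract are representations of a word in its sentential context, learned from unannotated text and then used as features for taggers and parsers. The sketch below shows one common way to compute such context-dependent vectors, a bidirectional LSTM over type-level word embeddings; the dimensions and the training objective are assumptions, not the paper's exact models.

    import torch
    import torch.nn as nn

    class TokenEmbedder(nn.Module):
        def __init__(self, vocab_size=50000, word_dim=100, hidden_dim=128):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, word_dim)  # context-independent type vectors
            self.bilstm = nn.LSTM(word_dim, hidden_dim, batch_first=True, bidirectional=True)

        def forward(self, token_ids):
            # token_ids: (batch, sentence_length) integer word indices
            words = self.word_emb(token_ids)
            context, _ = self.bilstm(words)
            # each position now carries a vector that depends on the surrounding words;
            # these rows can be appended to a POS tagger's or parser's input features
            return context  # (batch, sentence_length, 2 * hidden_dim)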