
    Optimal embedding parameters: A modelling paradigm

    Reconstruction of a dynamical system from a time series requires the selection of two parameters: the embedding dimension $d_e$ and the embedding lag $\tau$. Many competing criteria for selecting these parameters exist, and all are heuristic. Within the context of modelling the evolution operator of the underlying dynamical system, we show that one need only be concerned with the product $d_e\tau$. We introduce an information-theoretic criterion for the optimal selection of the embedding window $d_w = d_e\tau$. For infinitely long time series this method is equivalent to selecting the embedding lag that minimises the nonlinear model prediction error. For short and noisy time series we find that the results of this new algorithm are data dependent and superior to estimates of the embedding parameters obtained with the standard techniques.
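    A minimal sketch of the setting the abstract describes, not the paper's method: it builds delay vectors for a given $(d_e, \tau)$ and scores them with a generic nearest-neighbour one-step prediction error, standing in for the nonlinear model prediction error mentioned above. The information-theoretic criterion itself is not reproduced here, and all function names are illustrative assumptions. Note that the two embeddings below share the same window $d_w = d_e\tau = 8$.

```python
# Illustrative sketch only; not the authors' algorithm or code.
import numpy as np

def delay_embed(x, d_e, tau):
    """Delay vectors [x_t, x_{t+tau}, ..., x_{t+(d_e-1)tau}], one per row."""
    n = len(x) - (d_e - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(d_e)])

def one_step_error(x, d_e, tau):
    """Crude nearest-neighbour one-step prediction error for (d_e, tau).
    A generic stand-in for 'nonlinear model prediction error'."""
    V = delay_embed(x, d_e, tau)
    X, y = V[:-1], x[(d_e - 1) * tau + 1:]   # target: sample following each vector
    err = 0.0
    for i in range(1, len(X)):
        # nearest neighbour among strictly earlier delay vectors
        j = np.argmin(np.sum((X[:i] - X[i]) ** 2, axis=1))
        err += (y[j] - y[i]) ** 2
    return err / (len(X) - 1)

rng = np.random.default_rng(0)
x = np.sin(0.3 * np.arange(600)) + 0.05 * rng.standard_normal(600)
# Two (d_e, tau) pairs sharing the same embedding window d_w = d_e * tau = 8:
for d_e, tau in [(4, 2), (2, 4)]:
    print(f"d_e={d_e}, tau={tau}, error={one_step_error(x, d_e, tau):.5f}")
```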

    Learning by stochastic serializations

    Complex structures are typical in machine learning. Tailoring learning algorithms to every structure requires an effort that may be saved by defining a generic learning procedure adaptive to any complex structure. In this paper, we propose to map any complex structure onto a generic form, called a serialization, over which we can apply any sequence-based density estimator. We then show how to transfer the learned density back onto the space of original structures. To expose the learning procedure to the structural particularities of the original structures, we take care that the serializations accurately reflect the structures' properties. Since enumerating all serializations is infeasible, we propose an effective way to sample representative serializations that preserve the statistics of the complete set. Our method is competitive with or better than state-of-the-art learning algorithms that have been specifically designed for given structures. In addition, since the serialization involves sampling from a combinatorial process, it provides considerable protection from overfitting, which we clearly demonstrate in a number of experiments. Comment: Submission to NeurIPS 201
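    A hedged sketch of the core idea, under the assumption that one admissible serialization rule is a random depth-first traversal of a rooted structure; the paper's actual serialization scheme and sampling procedure are not reproduced here, and the helper name is hypothetical. Each sampled sequence is one linearisation of the same structure, so feeding many samples to any sequence-based density estimator approximates training on the full combinatorial set of serializations.

```python
# Illustrative sketch only; not the paper's implementation.
import random

def sample_serialization(tree, root, rng):
    """One random DFS serialization: children are visited in random order,
    so repeated calls sample from the set of admissible linearisations."""
    out, stack = [], [root]
    while stack:
        node = stack.pop()
        out.append(node)
        children = list(tree.get(node, []))
        rng.shuffle(children)          # the stochastic step
        stack.extend(children)
    return out

# Toy structure: a small rooted tree as an adjacency dict.
tree = {"a": ["b", "c"], "b": ["d", "e"], "c": []}
rng = random.Random(0)
for _ in range(5):
    print(sample_serialization(tree, "a", rng))
```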