
    Investment strategy due to the minimization of portfolio noise level by observations of coarse-grained entropy

    Using a recently developed method of noise-level estimation that makes use of properties of the coarse-grained entropy, we have analyzed the noise level for the Dow Jones index and a few stocks from the New York Stock Exchange. We have found that the noise level ranges from 40 to 80 percent of the signal variance. The condition of a minimal noise level has been applied to construct optimal portfolios from selected shares. We show that implementation of a corresponding threshold investment strategy leads to positive returns for historical data. Comment: 6 pages, 1 figure, 1 table, Proceedings of the conference APFA4. See http://www.chaosandnoise.or
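    The portfolio-construction step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the per-asset noise fractions and the threshold value below are made up, standing in for output of the coarse-grained-entropy estimator; assets above the noise threshold are discarded and classical minimum-variance weights are computed for the rest.

```python
import numpy as np

# Hypothetical per-asset noise levels (fraction of signal variance),
# assumed to come from a coarse-grained-entropy estimator as in the abstract.
noise_level = {"A": 0.45, "B": 0.75, "C": 0.55, "D": 0.40}
threshold = 0.6  # keep only assets whose noise stays below this fraction

kept = [k for k, v in sorted(noise_level.items()) if v < threshold]

# Toy return series for the kept assets (rows = days, cols = assets).
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(250, len(kept)))

# Classical minimum-variance weights: w proportional to inv(Cov) @ 1,
# normalised to sum to 1.
cov = np.cov(returns, rowvar=False)
ones = np.ones(len(kept))
w = np.linalg.solve(cov, ones)
w /= w.sum()

print(dict(zip(kept, np.round(w, 3))))
```

    The threshold strategy in the abstract would then trade only the filtered, minimum-noise portfolio.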

    Anti-deterministic behavior of discrete systems that are less predictable than noise

    We present a new type of deterministic dynamical behaviour that is less predictable than white noise. We call it anti-deterministic (AD) because time series corresponding to the dynamics of such systems do not generate deterministic lines in recurrence plots for small thresholds. We show that, although the dynamics is chaotic in the sense of exponential divergence of nearby initial conditions, and although some properties of AD data are similar to white noise, the AD dynamics is in fact less predictable than noise and hence differs from the output of pseudo-random number generators. Comment: 6 pages, 5 figures. See http://www.chaosandnoise.or
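    The recurrence-plot criterion used above can be illustrated with a minimal numpy sketch (not the authors' code): a periodic signal produces long diagonal lines of recurrences at a small threshold, while white noise produces only short ones.

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 iff |x_i - x_j| < eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def longest_diagonal(R):
    """Length of the longest run of 1s on any off-main super-diagonal
    (a 'deterministic line' in recurrence-plot terminology)."""
    n = len(R)
    best = 0
    for k in range(1, n):
        run = 0
        for v in np.diagonal(R, k):
            run = run + 1 if v else 0
            best = max(best, run)
    return best

# A periodic signal recurs exactly after one period, giving long diagonals.
t = np.arange(200)
periodic = np.sin(2 * np.pi * t / 25)
R = recurrence_plot(periodic, eps=0.05)

# White noise, by contrast, yields no long diagonal lines at small eps.
noise = np.random.default_rng(1).normal(size=200)
Rn = recurrence_plot(noise, eps=0.05)

print(longest_diagonal(R), longest_diagonal(Rn))
```

    An anti-deterministic series, per the abstract, behaves like the noise case here despite being generated deterministically.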

    Noise reduction in chaotic time series by a local projection with nonlinear constraints

    On the basis of a local-projective (LP) approach, we develop a method of noise reduction in time series that makes use of nonlinear constraints arising from the deterministic character of the underlying dynamical system. A Delaunay-triangulation approach is used to find the optimal nearest-neighbouring points in the time series. The efficiency of our method is comparable to that of standard LP methods, but our method is more robust to errors in the input-parameter estimation. The approach has been successfully applied to separating a signal from noise in the chaotic Henon and Lorenz models, as well as to noisy experimental data obtained from an electronic Chua circuit. The method works properly for a mixture of additive and dynamical noise and can be used for noise-level detection. Comment: 11 pages, 12 figures. See http://www.chaosandnoise.or
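    The local-projective idea can be illustrated with a much-simplified, zeroth-order sketch: neighbourhood averaging in a delay embedding (in the spirit of simple nonlinear noise reduction), rather than the paper's constrained projection with Delaunay-selected neighbours. The Henon-map test case matches the abstract; parameter values are illustrative.

```python
import numpy as np

def embed(x, dim, tau=1):
    """Delay embedding: rows are vectors (x_i, x_{i+tau}, ..., x_{i+(dim-1)tau})."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def lp_denoise(x, dim=3, k=10):
    """Zeroth-order local projection: replace each embedded point by the
    centroid of its k nearest neighbours, reading off the middle coordinate."""
    Y = embed(x, dim)
    out = x.copy()
    for i, p in enumerate(Y):
        d = np.linalg.norm(Y - p, axis=1)
        nn = np.argsort(d)[:k]
        out[i + dim // 2] = Y[nn].mean(axis=0)[dim // 2]
    return out

# Noisy Henon-map x-coordinate, as in the abstract's test case.
a, b, n = 1.4, 0.3, 500
x = np.empty(n); y = np.empty(n)
x[0], y[0] = 0.1, 0.1
for i in range(n - 1):
    x[i + 1] = 1 - a * x[i] ** 2 + y[i]
    y[i + 1] = b * x[i]
rng = np.random.default_rng(2)
noisy = x + rng.normal(0, 0.05, n)
cleaned = lp_denoise(noisy)

print(np.std(noisy - x), np.std(cleaned - x))
```

    The full method replaces the plain centroid with a projection onto the locally estimated attractor subject to the nonlinear constraints, which is what makes it competitive with standard LP schemes.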

    Estimation of a Noise Level Using Coarse-Grained Entropy of Experimental Time Series of Internal Pressure in a Combustion Engine

    We report our results on non-periodic experimental time series of pressure in a single-cylinder spark-ignition engine. The experiments were performed for different levels of loading. We estimate the noise level in the internal pressure by calculating the coarse-grained entropy from variations of maximal pressures in successive cycles. The results show that the dynamics of the combustion is a nonlinear multidimensional process mediated by noise. Our results also show that the noise level in the internal pressure, defined in this way, is not a monotonic function of loading. Comment: 12 pages, 6 figures

    How random is your heart beat?

    We measure the content of random uncorrelated noise in heart rate variability using a general method of noise-level estimation based on coarse-grained entropy. We show that, except for atrial fibrillation, the level of such noise is usually within 5-15% of the variance of the data, and that the variability due to linearly correlated processes is dominant in all analysed cases except atrial fibrillation. The nonlinear deterministic content of heart rate variability remains significant and may not be ignored. Comment: see http://urbanowicz.org.p

    Risk evaluation with enhanced covariance matrix

    We propose a route for the evaluation of risk based on a transformation of the covariance matrix. The approach uses a `potential' or `objective' function. This allows us to rescale data from different assets (or sources) such that each data set then has similar statistical properties in terms of its probability distribution. The method is tested using historical data from both the New York and Warsaw Stock Exchanges. Comment: see urbanowicz.org.p
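    The abstract does not specify the potential/objective function, so as a hedged stand-in that achieves the same stated goal, giving every asset similar marginal statistics before the covariance is computed, a rank-based gaussianization can be sketched:

```python
import numpy as np
from statistics import NormalDist

def gaussianize(col):
    """Rank-transform one return series to standard-normal marginals.

    This is a stand-in for the paper's (unspecified) rescaling: after the
    transform, every asset has the same marginal distribution, so the
    covariance matrix is not distorted by differing tails or skew."""
    n = len(col)
    ranks = np.argsort(np.argsort(col))   # ranks 0..n-1
    u = (ranks + 0.5) / n                 # map into the open interval (0, 1)
    nd = NormalDist()
    return np.array([nd.inv_cdf(p) for p in u])

rng = np.random.default_rng(3)
# Two toy assets with very different marginals: heavy-tailed vs. skewed.
asset1 = rng.standard_t(df=3, size=1000)
asset2 = rng.lognormal(size=1000)
data = np.column_stack([gaussianize(asset1), gaussianize(asset2)])

enhanced_cov = np.cov(data, rowvar=False)
print(np.round(enhanced_cov, 3))
```

    After the transform both diagonal entries are close to 1 regardless of the original distributions, so the off-diagonal structure carries the comparable cross-asset risk information.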

    Automating biomedical data science through tree-based pipeline optimization

    Over the past decade, data science and machine learning have grown from a mysterious art form to a staple tool across a variety of fields in academia, business, and government. In this paper, we introduce the concept of tree-based pipeline optimization for automating one of the most tedious parts of machine learning: pipeline design. We implement a Tree-based Pipeline Optimization Tool (TPOT) and demonstrate its effectiveness on a series of simulated and real-world genetic data sets. In particular, we show that TPOT can build machine learning pipelines that achieve competitive classification accuracy and discover novel pipeline operators, such as synthetic feature constructors, that significantly improve classification accuracy on these data sets. We also highlight the current challenges of pipeline optimization, such as the tendency to produce pipelines that overfit the data, and suggest future research paths to overcome these challenges. As such, this work represents an early step toward fully automating machine learning pipeline design. Comment: 16 pages, 5 figures, to appear in EvoBIO 2016 proceedings
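    The underlying idea, searching over pipelines assembled from interchangeable operators, can be shown with a self-contained toy sketch. Everything here is made up for illustration: the operators, the nearest-centroid terminal, and the plain random search all stand in for TPOT's genetic programming over trees of scikit-learn operators.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy two-class data: class 0 around (-1, -1), class 1 around (+1, +1).
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Pipeline "operators": each maps a feature matrix to a feature matrix.
OPS = {
    "scale": lambda Z: (Z - Z.mean(0)) / Z.std(0),
    "square": lambda Z: np.hstack([Z, Z ** 2]),   # a synthetic-feature step
    "identity": lambda Z: Z,
}

def nearest_centroid_acc(Z, y):
    """Terminal classifier: training accuracy of a nearest-centroid rule."""
    c0, c1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
    pred = np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)
    return (pred.astype(int) == y).mean()

def evaluate(pipeline, X, y):
    """A pipeline is a sequence of operator names (a tree flattened to a path)."""
    Z = X
    for name in pipeline:
        Z = OPS[name](Z)
    return nearest_centroid_acc(Z, y)

# Random search over short pipelines, standing in for TPOT's evolutionary loop.
names = list(OPS)
candidates = [tuple(rng.choice(names, size=rng.integers(1, 4)))
              for _ in range(30)]
best = max(candidates, key=lambda p: evaluate(p, X, y))
print(best, evaluate(best, X, y))
```

    TPOT proper evolves such structures with mutation and crossover, scores them by cross-validated accuracy rather than training accuracy (which is what makes overfitting, noted above, a live concern), and exports the winner as scikit-learn code.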