322 research outputs found
Predictability: a way to characterize Complexity
Different aspects of the predictability problem in dynamical systems are
reviewed. The deep relation among Lyapunov exponents, Kolmogorov-Sinai entropy,
Shannon entropy and algorithmic complexity is discussed. In particular, we
emphasize how a characterization of the unpredictability of a system gives a
measure of its complexity. Adopting this point of view, we review some
developments in the characterization of the predictability of systems showing
different kinds of complexity: from low-dimensional systems to high-dimensional
ones with spatio-temporal chaos and to fully developed turbulence. Special
attention is devoted to finite-time and finite-resolution effects on
predictability, which can be accounted for with suitable generalizations of the
standard indicators. The problems involved in systems with intrinsic randomness
are discussed, with emphasis on the important problems of distinguishing chaos
from noise and of modeling the system. The characterization of irregular
behavior in systems with discrete phase space is also considered. Comment: 142 LaTeX pages, 41 included eps figures, submitted to Physics Reports.
Related information at this http://axtnt2.phys.uniroma1.i
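As a pointer to how one of the review's central indicators is computed in practice, here is a minimal sketch (function name and parameters are illustrative, not taken from the paper) of estimating the largest Lyapunov exponent of the logistic map, whose value ln 2 ≈ 0.693 at r = 4 is a textbook benchmark:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n_transient=1000, n_iter=100_000):
    """Estimate the Lyapunov exponent as the orbit average of log|f'(x)|."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n_iter):
        d = abs(r * (1 - 2 * x))          # |f'(x)| for f(x) = r x (1 - x)
        if d > 0:                         # guard the measure-zero point x = 1/2
            acc += math.log(d)
        x = r * x * (1 - x)
    return acc / n_iter

print(round(lyapunov_logistic(), 3))      # close to ln 2 ≈ 0.693
```

A positive estimate signals exponential divergence of nearby orbits, i.e. chaos; a negative one signals regular dynamics.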
A Pseudo Random Numbers Generator Based on Chaotic Iterations. Application to Watermarking
In this paper, a new chaotic pseudo-random number generator (PRNG) is
proposed. It combines the well-known ISAAC and XORshift generators with chaotic
iterations. This PRNG possesses important properties of topological chaos and
can successfully pass NIST and TestU01 batteries of tests. This makes our
generator suitable for information security applications like cryptography. As
an illustrative example, an application in the field of watermarking is
presented. Comment: 11 pages, 7 figures, In WISM 2010, Int. Conf. on Web Information
Systems and Mining, volume 6318 of LNCS, Sanya, China, pages 202--211,
October 201
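The abstract does not detail how ISAAC, XORshift, and the chaotic iterations are combined, so no attempt is made to reproduce the proposed PRNG here. For orientation only, a minimal sketch of Marsaglia's xorshift32, one of the two named ingredient generators:

```python
def xorshift32(state):
    """One step of Marsaglia's 32-bit xorshift; state must be a nonzero 32-bit int."""
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

# Usage: draw a few values from a fixed seed.
s = 2463534242
draws = []
for _ in range(3):
    s = xorshift32(s)
    draws.append(s)
print(draws)
```

A bare xorshift generator is known to fail some statistical tests, which is one motivation for combining it with other constructions, as the paper does, before targeting the NIST and TestU01 batteries.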
Mixing Bandt-Pompe and Lempel-Ziv approaches: another way to analyze the complexity of continuous-states sequences
In this paper, we propose to mix the approach underlying Bandt-Pompe
permutation entropy with Lempel-Ziv complexity, to design what we call
Lempel-Ziv permutation complexity. The principle consists of two steps: (i)
transformation of a continuous-state series that is intrinsically multivariate
or arises from embedding into a sequence of permutation vectors, where the
components are the positions of the components of the initial vector when
re-arranged; (ii) computing the Lempel-Ziv complexity of this series of
`symbols', drawn from a discrete finite-size alphabet. On the one hand, the
permutation entropy of Bandt-Pompe aims at the study of the entropy of such a
sequence; i.e., the entropy of patterns in a sequence (e.g., local increases or
decreases). On the other hand, the Lempel-Ziv complexity of a discrete-state
sequence aims at the study of the temporal organization of the symbols (i.e.,
the rate of compressibility of the sequence). Thus, the Lempel-Ziv permutation
complexity aims to take advantage of both methods. The potential of such a
combined approach - a permutation procedure followed by a complexity analysis
- is evaluated on both simulated data and real data. In both cases, we compare
the individual approaches with the combined approach. Comment: 30 pages, 4 figures
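The two-step principle described above can be sketched as follows (function names and details are illustrative, not the paper's exact formulation): step (i) maps each embedded window to its ordinal pattern, and step (ii) counts LZ76 phrases in the resulting symbol sequence.

```python
import math
from itertools import permutations

def ordinal_symbols(series, d=3):
    """Step (i): map each length-d window to its rank permutation (Bandt-Pompe)."""
    idx = {p: i for i, p in enumerate(permutations(range(d)))}
    symbols = []
    for i in range(len(series) - d + 1):
        window = series[i:i + d]
        pattern = tuple(sorted(range(d), key=lambda k: window[k]))
        symbols.append(idx[pattern])
    return symbols

def _seen_before(seq, i, l):
    """Does seq[i:i+l] occur starting at some position j < i?"""
    return any(seq[j:j + l] == seq[i:i + l] for j in range(i))

def lz76_complexity(seq):
    """Step (ii): number of distinct phrases in the LZ76 parsing of seq."""
    phrases, i, n = 0, 0, len(seq)
    while i < n:
        l = 1
        while i + l <= n and _seen_before(seq, i, l):
            l += 1                      # extend the phrase while it is not new
        phrases += 1
        i += l
    return phrases

# Usage: a (quasi-)periodic series yields far fewer phrases than a chaotic one.
periodic = [math.sin(0.5 * n) for n in range(300)]
x, chaotic = 0.3, []
for _ in range(300):
    x = 4 * x * (1 - x)
    chaotic.append(x)
print(lz76_complexity(ordinal_symbols(periodic)),
      lz76_complexity(ordinal_symbols(chaotic)))
```

The phrase count captures the temporal organization of the ordinal symbols: repetitive pattern sequences compress into few phrases, while chaotic ones keep producing new phrases.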
Research on digital image watermark encryption based on hyperchaos
Digital watermarking embeds meaningful information, as one or more hidden watermarks, into a carrier image. It is difficult for an attacker to extract or remove a hidden watermark from the image, and especially to crack the watermark itself. Combining digital watermarking with traditional image encryption greatly improves resistance to attacks, which makes it a good method for preserving the integrity of the original image. The research in this thesis includes: (1) a literature review, which found the hyperchaotic watermarking technique relatively advantageous and made it the main subject of this programme; (2) the theoretical foundations of watermarking technologies, including the human visual system (HVS), colour space transforms, the discrete wavelet transform (DWT), the main watermark embedding algorithms, and the mainstream methods for improving watermark robustness and evaluating embedding performance; (3) a devised hyperchaotic scrambling technique, applied to colour image watermarks to improve encryption and anti-cracking capability. The experiments in this research demonstrate the robustness and other advantages of the devised technique. This thesis focuses on combining chaotic scrambling and wavelet watermark embedding to achieve a hyperchaotic digital watermark for encrypting digital products, with the human visual system (HVS) and other factors taken into account. This research is of significant importance and has industrial application value.
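As an illustration of the scrambling idea, the following sketch uses a plain logistic map as a stand-in for the thesis's hyperchaotic system, whose equations are not given in the abstract; all names and parameters are illustrative. A key value drives a chaotic orbit whose ranks define a pixel permutation:

```python
def chaotic_permutation(n, key=0.6, r=3.99, burn_in=100):
    """Rank a chaotic logistic-map orbit to obtain a key-dependent permutation."""
    x, orbit = key, []
    for _ in range(burn_in):            # decorrelate the orbit from the key value
        x = r * x * (1 - x)
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(x)
    return sorted(range(n), key=lambda i: orbit[i])

def scramble(pixels, key=0.6):
    perm = chaotic_permutation(len(pixels), key)
    return [pixels[p] for p in perm]

def unscramble(scrambled, key=0.6):
    perm = chaotic_permutation(len(scrambled), key)
    out = [None] * len(scrambled)
    for i, p in enumerate(perm):        # invert: scrambled[i] came from position p
        out[p] = scrambled[i]
    return out

# Usage: scrambling is invertible with the right key.
watermark = list(range(64))
print(unscramble(scramble(watermark)) == watermark)
```

Sensitivity to initial conditions means a slightly different key produces an entirely different permutation, which is what makes the scrambling hard to undo without the key.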
Noise Optimizes Super-Turing Computation In Recurrent Neural Networks
This paper explores the benefit of added noise in increasing the computational complexity of digital recurrent neural networks (RNNs). The physically accepted model of the universe imposes rational-number, stochastic limits on all calculations. An analog RNN with those limits computes at the super-Turing complexity level BPP/log*. In this paper, we demonstrate how noise aids digital RNNs in attaining super-Turing operation similar to analog RNNs. We investigate moving limited-precision systems from not being chaotic at small amounts of noise, through consistency with chaos, to overwhelming it at large amounts of noise. A Kolmogorov-complexity-based proof shows that an infinite hierarchy of computational classes exists between P, the Turing class, and BPP/log*. The hierarchy offers the possibility that noise-enhanced digital RNNs could operate at a super-Turing level less complex than BPP/log*. As the uniform noise increases, the digital RNNs develop positive Lyapunov exponents, indicating that chaos is mimicked. The exponents reach the accepted values for the logistic and Hénon maps when the noise equals eight times the least significant bit of the noisy recurrent signals for the logistic digital RNN and four times for the Hénon digital RNN.
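The transition described here, from a limited-precision system that is not chaotic on its own to one that mimics chaos once noise is added, can be illustrated on a toy quantized logistic map (not the paper's RNN model; bit depth and noise levels are illustrative). Without noise the quantized orbit collapses onto a short cycle; noise of a few least-significant bits keeps it exploring the state space.

```python
import random

def distinct_states(bits=10, noise_lsb=0, steps=5000, x0=0.3, seed=1):
    """Count distinct quantized states visited by a (noisy) logistic map."""
    rng = random.Random(seed)
    q = 1 << bits                        # number of quantization levels
    x = round(x0 * q)
    visited = set()
    for _ in range(steps):
        visited.add(x)
        xf = 4.0 * (x / q) * (1 - x / q)                    # logistic map, r = 4
        if noise_lsb:
            xf += rng.uniform(-noise_lsb, noise_lsb) / q    # noise in LSB units
        x = min(q - 1, max(0, round(xf * q)))               # re-quantize, clamp
    return len(visited)

# Usage: the deterministic orbit visits fewer states than the noisy one.
print(distinct_states(noise_lsb=0), distinct_states(noise_lsb=8))
```

The deterministic count is bounded by the orbit's transient plus its cycle length, whereas the noisy orbit keeps visiting new quantized states, a finite-precision analogue of the chaos-mimicking behaviour the paper measures with Lyapunov exponents.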
How neural networks learn to classify chaotic time series
We tackle the outstanding issue of analyzing the inner workings of neural networks trained to classify regular-vs-chaotic time series. This setting, well studied in dynamical systems, enables thorough formal analyses. We focus specifically on a family of networks dubbed large kernel convolutional neural networks (LKCNNs), recently introduced by Boullé et al. [403, 132261 (2021)]. These non-recursive networks have been shown to outperform other established architectures (e.g., residual networks, shallow neural networks, and fully convolutional networks) at this classification task. Furthermore, they outperform "manual" classification approaches based on direct reconstruction of the Lyapunov exponent. We find that LKCNNs use qualitative properties of the input sequence. We show that LKCNN models trained from random weight initialization end up in two main performance groups: one with relatively low performance (≈ 0.72 average classification accuracy) and one with high classification performance (≈ 0.94 average classification accuracy). Notably, the models in the low-performance class display periodic activations that are qualitatively similar to those exhibited by LKCNNs with random weights. This could give very general criteria for identifying, a priori, trained weights that yield poor accuracy.
- …