Algorithmic complexity of quantum capacity
Recently the theory of communication developed by Shannon has been extended
to the quantum realm by exploiting the rules of quantum theory. The latter
rests on complex vector spaces. However, complex (as well as real) numbers are
just idealizations and are not available in practice, where we can only deal
with rational numbers. This fact naturally leads to the question of whether
the notions of capacity developed for quantum channels truly capture their
ability to transmit information. Here we answer this question for the quantum
capacity. To this end we resort to the notion of semi-computability in order
to describe quantum states and quantum channel maps approximately, by rational
numbers. We then introduce algorithmic entropies (such as algorithmic quantum
coherent information) and derive their relevant properties. Finally, we define
the algorithmic quantum capacity and prove that it equals the standard one.
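
For orientation, the standard quantum capacity that the algorithmic version is
proven to match is, by the Lloyd-Shor-Devetak theorem, the regularized maximal
coherent information of the channel; in standard notation (not taken from the
abstract):

    % Coherent information of channel N on input rho, with S the von Neumann
    % entropy and |psi_rho> a purification of rho:
    I_c(\rho,\mathcal{N}) = S\bigl(\mathcal{N}(\rho)\bigr)
        - S\bigl((\mathrm{id}\otimes\mathcal{N})(|\psi_\rho\rangle\langle\psi_\rho|)\bigr)

    % Quantum capacity as regularized maximal coherent information:
    Q(\mathcal{N}) = \lim_{n\to\infty}\frac{1}{n}\max_{\rho}
        I_c\bigl(\rho,\mathcal{N}^{\otimes n}\bigr)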
Sequential Predictions based on Algorithmic Complexity
This paper studies sequence prediction based on the monotone Kolmogorov
complexity Km = -log m, i.e. based on universal deterministic/one-part MDL.
m is extremely close to Solomonoff's universal prior M, the latter being an
excellent predictor in deterministic as well as probabilistic environments,
where performance is measured in terms of convergence of posteriors or losses.
Despite this closeness to M, it is difficult to assess the prediction quality
of m, since little is known about the closeness of their posteriors, which are
the important quantities for prediction. We show that for deterministic
computable environments the "posterior" and losses of m converge, but rapid
convergence can only be shown on-sequence; off-sequence convergence can be
slow. In probabilistic environments, neither the posterior nor the losses
converge in general.
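
The abstract uses these quantities without defining them; in one common
formulation (a sketch, not quoted from the paper):

    % U is a monotone universal Turing machine, \ell(p) the length of
    % program p, and U(p) = x* means the output of U on p starts with x
    % (the sum is usually restricted to minimal such p):
    \begin{aligned}
      M(x)       &= \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)}
                    && \text{(Solomonoff's universal prior)}\\
      Km(x)      &= \min\{\ell(p) : U(p)=x*\},
                    \qquad m(x) = 2^{-Km(x)}\\
      m(b\mid x) &= m(xb)/m(x)
                    && \text{(the ``posterior'' whose convergence is studied)}
    \end{aligned}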
Algorithmic Complexity of Real Financial Markets
A new approach to understanding the complex behavior of financial market
indices, using tools from thermodynamics and statistical physics, is
developed. Physical complexity, a magnitude rooted in Kolmogorov-Chaitin
theory, is applied to binary sequences built from real time series of
financial market indices. The study is based on NASDAQ and Mexican IPC data.
This magnitude behaves differently when applied to intervals of the series
preceding crashes and to intervals in which no financial turbulence is
observed. The connection between our results and the Efficient Market
Hypothesis is discussed.
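
The abstract does not specify how the binary sequences are built or how the
magnitude is estimated, and physical complexity is not plain compressibility;
the sketch below only illustrates the generic pipeline of binarizing an index
series and bounding its algorithmic complexity by compression (helper names
are hypothetical):

    import zlib

    def binarize(prices):
        """Map an index series to a 0/1 string: 1 for an up move, 0 otherwise."""
        return "".join("1" if b > a else "0" for a, b in zip(prices, prices[1:]))

    def compressed_bits(bits):
        """Compressed length in bits: a crude computable upper bound on the
        (incomputable) algorithmic complexity of the sequence."""
        return 8 * len(zlib.compress(bits.encode()))

    # Compare a window preceding a suspected crash with a calm window;
    # a lower compressed size suggests more regularity in the sequence.
    window = binarize([100.0, 101.2, 100.8, 101.5, 101.1, 102.0])
    print(window, compressed_bits(window))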
Algorithmic Complexity of Power Law Networks
It has been experimentally observed that the majority of real-world networks
follow a power law degree distribution. The aim of this paper is to study the
algorithmic complexity of such "typical" networks. The contribution of this
work is twofold.

First, we define a deterministic condition for checking whether a graph has a
power law degree distribution and experimentally validate it on real-world
networks. This definition allows us to derive interesting properties of power
law networks. We observe that for exponents of the degree distribution in a
certain range, such networks exhibit the double power law phenomenon that has
been observed for several real-world networks. Our observation indicates that
this phenomenon could be explained by purely graph-theoretic properties.

The second aim of our work is to give a novel theoretical explanation of why
many algorithms run faster on real-world data than worst-case analysis
predicts. We show how to exploit the power law degree distribution to design
faster algorithms for a number of classical P-time problems, including
transitive closure, maximum matching, determinant, PageRank and matrix
inverse. Moreover, we deal with the problems of counting triangles and finding
a maximum clique. Previously, it had only been shown that these problems can
be solved very efficiently on power law graphs when these graphs are random,
e.g., drawn at random from some distribution. However, it is unclear how to
relate such a theoretical analysis to real-world graphs, which are fixed.
Instead, we show that the randomness assumption can be replaced with a simple
condition on the degrees of adjacent vertices, which can be used to obtain
similar results. As a result, in some range of power law exponents we are able
to solve the maximum clique problem in polynomial time, although in general
power law networks the problem is NP-complete.
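
The abstract does not spell out the algorithms. As one illustration of how a
skewed degree distribution speeds up a classical problem, here is the standard
degree-ordered triangle count (a generic sketch, not necessarily the paper's
construction): orienting edges toward higher-degree endpoints keeps out-degrees
small on power law graphs, so the neighborhood intersections stay cheap.

    from collections import defaultdict

    def count_triangles(edges):
        """Orient each edge from its lower- to its higher-degree endpoint
        (ties broken by vertex id), then intersect out-neighborhoods."""
        deg = defaultdict(int)
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        rank = lambda v: (deg[v], v)       # total order on vertices
        out = {v: set() for v in deg}      # oriented adjacency lists
        for u, v in edges:
            if rank(u) < rank(v):
                out[u].add(v)
            else:
                out[v].add(u)
        # Each triangle is counted exactly once, at its lowest-ranked vertex.
        return sum(len(out[u] & out[v]) for u in out for v in out[u])

    print(count_triangles([(0, 1), (1, 2), (0, 2), (2, 3)]))  # -> 1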
Algorithmic Complexity of Financial Motions
We survey the main applications of algorithmic (Kolmogorov) complexity to the problem of price dynamics in financial markets. We stress the differences between these works and put forward a general algorithmic framework in order to highlight its potential for financial data analysis. This framework is "general" in the sense that it is not constructed on the common assumption that price variations are predominantly stochastic in nature.

Keywords: algorithmic information theory; Kolmogorov complexity; financial returns; market efficiency; compression algorithms; information theory; randomness; price movements; algorithmic probability
Algorithmic complexity
The information content or complexity of an object can be measured by the length of its shortest description. For instance, the string "01010101010101010101010101010101" has the short description "16 repetitions of 01", while "11001000011000011101111011101100" presumably has no description simpler than writing down the string itself. More formally, the Algorithmic ("Kolmogorov") Complexity (AC) of a string x is defined as the length of the shortest program that computes or outputs x, where the program is run on some fixed reference universal computer.
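
AC itself is incomputable, but any lossless compressor yields a computable
upper bound on it, up to an additive constant. A minimal sketch of this
standard proxy (using longer strings than the 32-bit examples above, since
compressor overhead swamps such short inputs):

    import random, zlib

    def complexity_upper_bound(s):
        """Compressed length in bits: a computable upper bound, up to an
        additive constant, on the incomputable algorithmic complexity of s."""
        return 8 * len(zlib.compress(s.encode()))

    regular   = "01" * 16_000                      # "16000 repetitions of 01"
    irregular = "".join(random.choice("01") for _ in range(32_000))
    print(complexity_upper_bound(regular))    # small: the pattern compresses away
    print(complexity_upper_bound(irregular))  # roughly 1 bit per character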