Algorithmic Complexity of Financial Motions
We survey the main applications of algorithmic (Kolmogorov) complexity to the problem of price dynamics in financial markets. We stress the differences between these works and put forward a general algorithmic framework in order to highlight its potential for financial data analysis. This framework is "general" in the sense that it is not built on the common assumption that price variations are predominantly stochastic in nature.
Keywords: algorithmic information theory; Kolmogorov complexity; financial returns; market efficiency; compression algorithms; information theory; randomness; price movements; algorithmic probability
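As a hedged illustration of the compression-based approach this line of work relies on, one can estimate the (in)compressibility of a discretized price path with an off-the-shelf compressor. The byte strings below are synthetic stand-ins, not real market data:

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over raw size; values near 1.0 suggest the
    sequence is close to algorithmically random (incompressible)."""
    return len(zlib.compress(data, 9)) / len(data)

# Synthetic stand-ins for discretized price movements (illustrative only).
random.seed(0)
trending = bytes([1, 1] * 2048)                            # highly regular path
noisy = bytes(random.getrandbits(8) for _ in range(4096))  # pseudorandom path

ratio_regular = compression_ratio(trending)
ratio_noisy = compression_ratio(noisy)
```

A ratio near 1.0 for the noisy sequence is consistent with the efficient-market intuition that price variations carry little compressible structure; of course, a compressor like zlib only gives a crude upper bound on Kolmogorov complexity.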
Around Kolmogorov complexity: basic notions and results
Algorithmic information theory studies description complexity and randomness
and is now a well-known field of theoretical computer science and mathematical
logic. There are several textbooks and monographs devoted to this theory where
one can find the detailed exposition of many difficult results as well as
historical references. However, it seems that a short survey of its basic
notions and main results, relating these notions to each other, has been missing.
This report attempts to fill this gap and covers the basic notions of
algorithmic information theory: Kolmogorov complexity (plain, conditional,
prefix), Solomonoff universal a priori probability, notions of randomness
(Martin-Löf randomness, Mises–Church randomness), effective Hausdorff
dimension. We prove their basic properties (symmetry of information, connection
between a priori probability and prefix complexity, criterion of randomness in
terms of complexity, complexity characterization for effective dimension) and
show some applications (incompressibility method in computational complexity
theory, incompleteness theorems). This survey is based on the lecture notes of a
course given by the author at Uppsala University.
Informational Substitutes
We propose definitions of substitutes and complements for pieces of
information ("signals") in the context of a decision or optimization problem,
with game-theoretic and algorithmic applications. In a game-theoretic context,
substitutes capture diminishing marginal value of information to a rational
decision maker. We use the definitions to address the question of how and when
information is aggregated in prediction markets. Substitutes characterize
"best-possible" equilibria with immediate information aggregation, while
complements characterize "worst-possible", delayed aggregation. Game-theoretic
applications also include settings such as crowdsourcing contests and Q&A
forums. In an algorithmic context, where substitutes capture diminishing
marginal improvement of information to an optimization problem, substitutes
imply efficient approximation algorithms for a very general class of (adaptive)
information acquisition problems.
In tandem with these broad applications, we examine the structure and design
of informational substitutes and complements. They have equivalent, intuitive
definitions from disparate perspectives: submodularity, geometry, and
information theory. We also consider the design of scoring rules or
optimization problems so as to encourage substitutability or complementarity,
with positive and negative results. Taken as a whole, the results give some
evidence that, in parallel with substitutable items, informational substitutes
play a natural conceptual and formal role in game theory and algorithms.
Comment: Full version of FOCS 2016 paper. Single-column, 61 pages (48 main text, 13 references and appendix).
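The algorithmic claim above (diminishing marginal improvement implies efficient approximation) echoes the classical greedy guarantee for monotone submodular objectives. A hypothetical sketch using set coverage as a stand-in objective; the paper's actual decision-theoretic value functions are more general:

```python
def greedy_signal_selection(signals: dict, k: int) -> set:
    """Greedily pick k 'signals', each covering a set of scenarios.
    Coverage is monotone submodular, so greedy achieves a (1 - 1/e)
    approximation to the best k-subset (Nemhauser et al.)."""
    chosen, covered = set(), set()
    for _ in range(k):
        best = max(signals, key=lambda s: len(signals[s] - covered))
        chosen.add(best)
        covered |= signals[best]
    return chosen

# Hypothetical signals and the scenarios each one distinguishes.
signals = {"a": {1, 2, 3}, "b": {3, 4, 5, 6}, "c": {7}}
picked = greedy_signal_selection(signals, 2)
```

Each greedy step adds the signal with the largest marginal gain; diminishing marginal value of information is precisely what makes this myopic rule competitive.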
Preparation information and optimal decompositions for mixed quantum states
Consider a joint quantum state of a system and its environment. A measurement
on the environment induces a decomposition of the system state. Using
algorithmic information theory, we define the preparation information of a pure
or mixed state in a given decomposition. We then define an optimal
decomposition as a decomposition for which the average preparation information
is minimal. The average preparation information for an optimal decomposition
characterizes the system-environment correlations. We discuss properties and
applications of the concepts introduced above and give several examples.
Comment: 13 pages, LaTeX, 2 postscript figures.
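A loose classical analogue of the average preparation information, assuming one charges -log2 p_i bits to announce which member of the decomposition was prepared, is the Shannon entropy of the decomposition weights. This is only a stand-in for the algorithmic quantity the paper defines:

```python
import numpy as np

def average_label_bits(weights) -> float:
    """Shannon entropy of the decomposition weights: the expected number
    of bits needed to announce which pure state was prepared. A classical
    stand-in for the paper's algorithmic preparation information."""
    p = np.asarray(weights, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Two decompositions of the maximally mixed qubit rho = I/2:
eigen_decomposition = [0.5, 0.5]   # |0>, |1> with equal weight
tetrahedral = [0.25] * 4           # four pure states with equal weight
```

Under this crude measure the two-element eigendecomposition is cheaper (1 bit versus 2), matching the intuition that an optimal decomposition minimizes the average preparation information.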
Algorithmic information theory
This article is a brief guide to the field of algorithmic information theory (AIT), its underlying philosophy, and its most important concepts. AIT arises from mixing information theory and computation theory to obtain an objective and absolute notion of information in an individual object, and in so doing gives rise to an objective and robust notion of randomness of individual objects. This is in contrast to classical information theory, which is based on random variables and communication and has no bearing on the information or randomness of individual objects. After a brief overview, the major subfields, applications, history, and a map of the field are presented.
Decomposition Complexity
We consider a problem of decomposition of a ternary function into a
composition of binary ones from the viewpoint of communication complexity and
algorithmic information theory as well as some applications to cellular
automata.
Comment: Journées Automates Cellulaires 2010, Turku, Finland (2010).
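For Boolean functions the decomposition question can be checked exhaustively. The sketch below tests one particular bracketing, f(x, y, z) = g(h(x, y), z), by brute force over all 16 × 16 pairs of binary truth tables; this is an illustration of the problem statement, not the paper's communication-complexity analysis:

```python
from itertools import product

def decomposes_as_g_h(f) -> bool:
    """True iff f(x,y,z) == g(h(x,y), z) for some binary Boolean g, h."""
    tables = list(product((0, 1), repeat=4))  # all 16 binary truth tables
    return any(
        all(f(x, y, z) == g[2 * h[2 * x + y] + z]
            for x, y, z in product((0, 1), repeat=3))
        for h in tables for g in tables
    )

xor3 = lambda x, y, z: x ^ y ^ z            # decomposes: h = g = XOR
maj = lambda x, y, z: int(x + y + z >= 2)   # majority of three bits
```

Majority fails in this read-once shape because h(x, y) carries only one bit while majority must distinguish three cases of x + y; it does decompose once variables may be read more than once.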
Dagstuhl Reports: Volume 1, Issue 2, February 2011
Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro Pezzé, Martin C. Rinard, Westley Weimer and Andreas Zeller
Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Young
Expanding the Algorithmic Information Theory Frame for Applications to Earth Observation
Recent years have witnessed increased interest in compression-based methods and their applications to remote sensing, as these have a data-driven and parameter-free approach and can thus be successfully employed in several applications, especially in image information mining. This paper expands the algorithmic information theory frame on which these methods are based. On the one hand, algorithms originally defined in the pattern matching domain are reformulated, allowing a better understanding of the available compression-based tools for remote sensing applications. On the other hand, the use of existing compression algorithms is proposed to store satellite images with added semantic value.
CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
Compressive sampling offers a new paradigm for acquiring signals that are
compressible with respect to an orthonormal basis. The major algorithmic
challenge in compressive sampling is to approximate a compressible signal from
noisy samples. This paper describes a new iterative recovery algorithm called
CoSaMP that delivers the same guarantees as the best optimization-based
approaches. Moreover, this algorithm offers rigorous bounds on computational
cost and storage. It is likely to be extremely efficient for practical problems
because it requires only matrix-vector multiplies with the sampling matrix. For
many cases of interest, the running time is just O(N*log^2(N)), where N is the
length of the signal.
Comment: 30 pages. Revised. Presented at Information Theory and Applications, 31 January 2008, San Diego.
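A minimal sketch of the CoSaMP iteration described above — proxy formation, support merging, least-squares estimation, pruning, residual update — on a noiseless instance with a Gaussian sampling matrix; this is an illustration, not the authors' reference implementation:

```python
import numpy as np

def cosamp(Phi, y, s, iters=30, tol=1e-10):
    """Recover an s-sparse signal x from samples y = Phi @ x."""
    m, N = Phi.shape
    x = np.zeros(N)
    r = y.copy()
    for _ in range(iters):
        proxy = Phi.T @ r                                    # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * s:]           # 2s largest entries
        T = np.union1d(omega, np.nonzero(x)[0]).astype(int)  # merge supports
        b = np.zeros(N)
        b[T] = np.linalg.lstsq(Phi[:, T], y, rcond=None)[0]  # least squares on T
        keep = np.argsort(np.abs(b))[-s:]                    # prune to s terms
        x = np.zeros(N)
        x[keep] = b[keep]
        r = y - Phi @ x                                      # update residual
        if np.linalg.norm(r) < tol:
            break
    return x

# Synthetic noiseless instance (illustrative parameters).
rng = np.random.default_rng(0)
m, N, s = 50, 128, 4
Phi = rng.standard_normal((m, N)) / np.sqrt(m)
x0 = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x0[support] = rng.standard_normal(s)
y = Phi @ x0
xhat = cosamp(Phi, y, s)
```

Note that the O(N log^2 N) running-time claim in the abstract relies on structured sampling operators with fast matrix-vector multiplies (e.g., partial Fourier); the dense products in this sketch cost O(mN) each.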