49,575 research outputs found

    Computing K-Trivial Sets by Incomplete Random Sets

    Full text link
    Every K-trivial set is computable from an incomplete Martin-L\"of random set, i.e., a Martin-L\"of random set that does not compute 0'.

    Martin-L\"of reducibility and cost functions

    Full text link
    Martin-L\"of (ML)-reducibility compares KK-trivial sets by examining the Martin-L\"of random sequences that compute them. We show that every KK-trivial set is computable from a c.e.\ set of the same ML-degree. We investigate the interplay between ML-reducibility and cost functions, which are used to both measure the number of changes in a computable approximation, and the type of null sets used to capture ML-random sequences. We show that for every cost function there is a c.e.\ set ML-above the sets obeying it (called an ML-complete set for the cost function). We characterise the KK-trivial sets computable from a fragment of the left-c.e.\ random real~Ω\Omega. This leads to a new characterisation of strong jump-traceability

    Computing from projections of random points: a dense hierarchy of subideals of the K-trivial degrees

    Full text link
    We study the sets that are computable from both halves of some (Martin-L\"of) random sequence, which we call \emph{1/2-bases}. We show that the collection of such sets forms an ideal in the Turing degrees that is generated by its c.e.\ elements. It is a proper subideal of the K-trivial sets. We characterise 1/2-bases as the sets computable from both halves of Chaitin's $\Omega$, and as the sets that obey the cost function $\mathbf c(x,s) = \sqrt{\Omega_s - \Omega_x}$. Generalising these results yields a dense hierarchy of subideals in the K-trivial degrees: for $k < n$, let $B_{k/n}$ be the collection of sets that are below any $k$ out of $n$ columns of some random sequence. As before, this is an ideal generated by its c.e.\ elements, and the random sequence in the definition can always be taken to be $\Omega$. Furthermore, the corresponding cost function characterisation reveals that $B_{k/n}$ is independent of the particular representation of the rational $k/n$, and that $B_p$ is properly contained in $B_q$ for rational numbers $p < q$. These results are proved using a generalisation of the Loomis--Whitney inequality, which bounds the measure of an open set in terms of the measures of its projections. The generality allows us to analyse arbitrary families of orthogonal projections. As it turns out, these do not give us new subideals of the K-trivial sets; we can calculate from the family which $B_p$ it characterises. We finish by showing that the union of $B_p$ for $p < 1$ is the collection of sets which are robustly computable from a random, a class previously studied by Hirschfeldt, Jockusch, Kuyper, and Schupp.
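
    The classical Loomis--Whitney inequality that this generalises bounds the $n$-dimensional Lebesgue measure of a set by the $(n-1)$-dimensional measures of its coordinate projections (standard statement, recalled here for context):

        \[
          \lambda_n(E)^{\,n-1} \;\le\; \prod_{i=1}^{n} \lambda_{n-1}\bigl(\pi_i(E)\bigr),
        \]

    where $\pi_i$ deletes the $i$-th coordinate. For $n = 2$ this reads $\lambda_2(E) \le \lambda_1(\pi_1 E)\,\lambda_1(\pi_2 E)$, which is the kind of bound that makes the square root in the cost function $\sqrt{\Omega_s - \Omega_x}$ for 1/2-bases plausible.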

    Density, forcing, and the covering problem

    Full text link
    We present a notion of forcing that can be used, in conjunction with other results, to show that there is a Martin-L\"of random set X such that X does not compute 0' and X computes every K-trivial set.

    Computing strategies for achieving acceptability

    Full text link
    We consider a trader who wants to direct his portfolio towards a set of acceptable wealths given by a convex risk measure. We propose a black-box algorithm whose inputs are the joint law of stock prices and the convex risk measure, and whose outputs are the numerical value of the initial capital requirement and the functional form of a trading strategy that achieves acceptability. We also prove optimality of the obtained capital. Comment: 17 pages.
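
    As a rough illustration of the black-box viewpoint (this is not the algorithm of the paper, and the function names are invented for the sketch): for a cash-invariant convex risk measure, the smallest capital that makes a terminal wealth acceptable is simply the risk measure evaluated at that wealth, which can be estimated from samples of the joint law. A minimal Python sketch using expected shortfall as the concrete risk measure:

        import numpy as np

        def expected_shortfall(losses, alpha=0.95):
            # Average of the worst (1 - alpha) fraction of losses: a convex, cash-invariant risk measure.
            losses = np.sort(np.asarray(losses, dtype=float))
            return losses[int(np.ceil(alpha * len(losses))):].mean()

        def capital_requirement(terminal_wealth, risk_measure=expected_shortfall):
            # Smallest cash c with c + terminal_wealth acceptable (risk <= 0); by cash invariance
            # this equals the risk measure applied to the loss -terminal_wealth.
            return risk_measure(-np.asarray(terminal_wealth, dtype=float))

        # Toy example: P&L of holding one share of a lognormal stock bought at 100 (placeholder joint law).
        rng = np.random.default_rng(0)
        prices = 100.0 * np.exp(0.05 + 0.2 * rng.standard_normal(100_000))
        print("initial capital required:", capital_requirement(prices - 100.0))

    Searching over candidate trading strategies on top of this evaluation would then yield the second output described in the abstract, the functional form of an acceptable strategy.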

    Calculus of Cost Functions

    Full text link
    Cost functions provide a framework for constructions of sets Turing below the halting problem that are close to computable. We carry out a systematic study of cost functions. We relate their algebraic properties to their expressive strength. We show that the class of additive cost functions describes the K-trivial sets. We prove a cost function basis theorem, and give a general construction for building computably enumerable sets that are close to being Turing complete. This work dates from 2010 and was submitted in 2013 to the long-delayed volume "The Incomputable" arising from the 2012 Cambridge Turing year.
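
    The additive cost functions mentioned here have a simple standard form (recalled for context, not restated in the abstract): $c$ is additive if $c(x,s) = \beta_s - \beta_x$ for some nondecreasing computable sequence of rationals $\langle \beta_s \rangle$ with finite supremum, the canonical example being

        \[
          c_\Omega(x,s) \;=\; \Omega_s - \Omega_x
        \]

    for a computable approximation of Chaitin's $\Omega$; the statement above says that obeying cost functions of this shape captures exactly the K-trivial sets.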

    Aspects of Chaitin's Omega

    Full text link
    The halting probability of a Turing machine, also known as Chaitin's Omega, is an algorithmically random number with many interesting properties. Since Chaitin's seminal work, many popular expositions have appeared, mainly focusing on the metamathematical or philosophical significance of Omega (or arguing against it). At the same time, a rich mathematical theory exploring the properties of Chaitin's Omega has been brewing in various technical papers, which quietly reveals the significance of this number to many aspects of contemporary algorithmic information theory. The purpose of this survey is to expose these developments and tell a story about Omega, which outlines its multifaceted mathematical properties and roles in algorithmic randomness.
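
    Concretely, Omega is the halting probability of a universal prefix-free machine $U$ (the standard definition):

        \[
          \Omega_U \;=\; \sum_{\sigma\,:\,U(\sigma)\downarrow} 2^{-|\sigma|},
        \]

    a left-c.e.\ real in $(0,1)$ that is Martin-L\"of random; roughly, its first $n$ bits suffice to decide the halting problem for all programs of length at most $n$.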

    Certainty Equivalent and Utility Indifference Pricing for Incomplete Preferences via Convex Vector Optimization

    Full text link
    For incomplete preference relations that are represented by multiple priors and/or multiple -- possibly multivariate -- utility functions, we define a certainty equivalent as well as the utility buy and sell prices and indifference price bounds as set-valued functions of the claim. Furthermore, we motivate and introduce the notion of a weak and a strong certainty equivalent. We will show that our definitions contain as special cases some definitions found in the literature so far on complete or special incomplete preferences. We prove monotonicity and convexity properties of utility buy and sell prices that hold in total analogy to the properties of the scalar indifference prices for complete preferences. We show how the (weak and strong) set-valued certainty equivalent as well as the indifference price bounds can be computed or approximated by solving convex vector optimization problems. Numerical examples and their economic interpretations are given for the univariate as well as for the multivariate case.
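
    For comparison, the scalar utility indifference (buy) price that these set-valued notions generalise is the usual one (standard definition, recalled here): $p^b(C)$ is the price at which buying the claim $C$ leaves the agent's optimal expected utility unchanged,

        \[
          \sup_{\pi} \mathbb E\bigl[u\bigl(x - p^b(C) + C + G_T(\pi)\bigr)\bigr]
          \;=\;
          \sup_{\pi} \mathbb E\bigl[u\bigl(x + G_T(\pi)\bigr)\bigr],
        \]

    where $x$ is the initial wealth, $u$ a utility function and $G_T(\pi)$ the terminal gain of an admissible strategy $\pi$; the sell price is defined symmetrically. Replacing the single prior and utility by families of them, as in the abstract, turns these scalar prices into set-valued buy and sell prices and indifference price bounds.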

    The Communication Cost of Information Spreading in Dynamic Networks

    Full text link
    This paper investigates the message complexity of distributed information spreading (a.k.a. gossip or token dissemination) in adversarial dynamic networks, where the goal is to spread $k$ tokens of information to every node of an $n$-node network. We consider the amortized (average) message complexity of spreading a token, assuming that the number of tokens is large. Our focus is on token-forwarding algorithms, which do not manipulate tokens in any way other than storing, copying, and forwarding them. We consider two types of adversaries that arbitrarily rewire the network while keeping it connected: the adaptive adversary, which is aware of the status of all the nodes and the algorithm (including the current random choices), and the oblivious adversary, which is oblivious to the random choices made by the algorithm. The central question that motivates our work is whether one can achieve subquadratic amortized message complexity for information spreading. We present two sets of results depending on how nodes send messages to their neighbors: (1) Local broadcast: we show a tight lower bound of $\Omega(n^2)$ on the number of amortized local broadcasts, which is matched by the naive flooding algorithm. (2) Unicast: we study the message complexity as a function of the number of dynamic changes in the network. To facilitate this, we introduce a natural complexity measure for analyzing dynamic networks, called adversary-competitive message complexity, where the adversary pays a unit cost for every topological change. Under this model, we show that if $k$ is sufficiently large, an optimal amortized message complexity of $O(n)$ can be obtained. We also present a randomized algorithm that achieves subquadratic amortized message complexity under an oblivious adversary when the number of tokens is not large.
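
    A toy simulation of the naive flooding baseline for the local-broadcast model (illustrative only: the adversary is replaced by a random connected graph each round, and all names are made up for the sketch). Every node broadcasts all tokens it currently holds to its neighbors each round, and the total number of local broadcasts until all $n$ nodes hold all $k$ tokens is counted:

        import random

        def random_connected_graph(n, rng):
            # Adversary stand-in: a random Hamiltonian path (which keeps the graph connected)
            # plus a few extra random edges.
            order = list(range(n))
            rng.shuffle(order)
            edges = {(min(a, b), max(a, b)) for a, b in zip(order, order[1:])}
            for _ in range(n // 2):
                a, b = rng.sample(range(n), 2)
                edges.add((min(a, b), max(a, b)))
            adj = {v: set() for v in range(n)}
            for a, b in edges:
                adj[a].add(b)
                adj[b].add(a)
            return adj

        def naive_flooding(n, k, seed=0):
            # Token-forwarding by local broadcast: returns the total number of broadcasts
            # used until every node holds every token.
            rng = random.Random(seed)
            have = [set() for _ in range(n)]
            for t in range(k):
                have[rng.randrange(n)].add(t)     # each token starts at a random node
            broadcasts = 0
            while any(len(h) < k for h in have):
                adj = random_connected_graph(n, rng)
                incoming = [set() for _ in range(n)]
                for v in range(n):
                    if have[v]:
                        broadcasts += 1           # one broadcast delivers all held tokens to all neighbors
                        for u in adj[v]:
                            incoming[u] |= have[v]
                for v in range(n):
                    have[v] |= incoming[v]
            return broadcasts

        print(naive_flooding(n=50, k=200))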

    Topological Analysis of Syntactic Structures

    Get PDF
    We use the persistent homology method of topological data analysis and dimensional analysis techniques to study data of syntactic structures of world languages. We analyze relations between syntactic parameters in terms of dimensionality, of hierarchical clustering structures, and of non-trivial loops. We show there are relations that hold across language families and additional relations that are family-specific. We then analyze the trees describing the merging structure of persistent connected components for languages in different language families, and we show that they partly correlate with historical phylogenetic trees, but with significant differences. We also show the existence of interesting non-trivial persistent first homology groups in various language families. We give examples where explicit generators for the persistent first homology can be identified, some of which appear to correspond to homoplasy phenomena, while others may have an explanation in terms of historical linguistics, corresponding to known cases of syntactic borrowing across different language subfamilies. Comment: 83 pages, LaTeX, 44 figures.
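
    The hierarchical-clustering part of this analysis can be reproduced in a few lines on any binary parameter matrix: single-linkage merge heights are exactly the death times of the degree-zero persistent homology classes (connected components). The data and language names below are placeholders, not the syntactic-parameter database used in the paper:

        import numpy as np
        from scipy.cluster.hierarchy import dendrogram, linkage
        from scipy.spatial.distance import pdist

        # Rows: languages; columns: binary syntactic parameters (placeholder values).
        languages = ["Lang_A", "Lang_B", "Lang_C", "Lang_D", "Lang_E"]
        params = np.array([
            [1, 0, 1, 1, 0, 1],
            [1, 0, 1, 0, 0, 1],
            [0, 1, 0, 0, 1, 1],
            [0, 1, 1, 0, 1, 0],
            [1, 1, 1, 1, 0, 1],
        ])

        # Hamming distance between parameter vectors, then single-linkage clustering.
        # The third column of `tree` lists the merge heights, i.e. the deaths of the
        # H_0 classes in the Vietoris--Rips filtration of this point cloud.
        tree = linkage(pdist(params, metric="hamming"), method="single")
        print(tree)
        dendrogram(tree, labels=languages, no_plot=True)   # set no_plot=False to draw the tree

    The non-trivial loops (persistent first homology) discussed above require a full persistent-homology package such as ripser and are not sketched here.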