
    Classical computing, quantum computing, and Shor's factoring algorithm

    This is an expository talk written for the Bourbaki Seminar. After a brief introduction, Section 1 discusses, in categorical language, the structure of classical deterministic computation. Basic notions of complexity, including the P/NP problem, are reviewed. Section 2 introduces the notion of quantum parallelism and explains the main issues of quantum computing. Section 3 is devoted to four quantum subroutines: initialization, quantum computation of classical Boolean functions, the quantum Fourier transform, and Grover's search algorithm. The central Section 4 explains Shor's factoring algorithm. Section 5 relates Kolmogorov complexity to the spectral properties of computable functions. An appendix contributes to the prehistory of quantum computing. (27 pp., no figures.)
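
    To make the reduction in Section 4 concrete: Shor's algorithm factors n by reducing factoring to order finding, and only the order-finding step is quantum. Here is a minimal Python sketch of the classical skeleton, with the quantum Fourier-transform subroutine replaced by brute-force order finding; the function names are ours, for illustration only.

```python
import math
import random

def find_order(a, n):
    # Brute-force order finding: smallest r > 0 with a^r = 1 (mod n).
    # This is the step Shor's algorithm replaces with the quantum
    # Fourier transform; classically it takes exponential time.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, max_tries=20):
    # Classical skeleton of Shor's reduction: factoring via order finding.
    for _ in range(max_tries):
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                  # lucky: a already shares a factor with n
        r = find_order(a, n)
        if r % 2 == 1:
            continue                  # need an even order
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue                  # trivial square root of 1; retry
        return math.gcd(y - 1, n)     # nontrivial factor of n
    return None

print(shor_classical(15))  # prints 3 or 5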

    Computability Theory

    Computability and computable enumerability are two of the fundamental notions of mathematics. Interest in effectiveness is already apparent in the famous Hilbert problems, in particular the second and tenth, and in early 20th-century work of Dehn, which initiated the study of word problems in group theory. The last decade has seen both the development of completely new subareas and remarkable growth in two-way interactions between classical computability theory and areas of application. There is also a great deal of work on algorithmic randomness, reverse mathematics, computable analysis, and computable structure theory/computable model theory. The goal of this workshop is to bring together researchers representing different aspects of computability theory to discuss recent advances and to stimulate future work.

    Inductive inference of recursive functions: complexity bounds

    This survey covers the principal results on the complexity of inductive inference for recursively enumerable classes of total recursive functions. Inductive inference is the process of finding an algorithm from sample computations. When the given class of functions is recursively enumerable, it is easy to define a natural complexity measure for inductive inference, namely the worst-case number of mindchanges for the first n functions in the given class. The complexity naturally depends not only on the class but also on the numbering, i.e. which function is first, which is second, and so on. It turns out that if the result of inference is a Gödel number, then the complexity of inference may vary between log₂ n + o(log₂ n) and an arbitrarily slow-growing recursive function. If the result of inference is an index in the numbering of the recursively enumerable class, then the complexity may grow up to const·n. Additionally, effects previously found in Kolmogorov complexity theory appear in the complexity of inductive inference as well.
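
    To illustrate the measure: the standard learner for an enumerated class is identification by enumeration, which conjectures the first function in the numbering consistent with the data seen so far; each revision of the conjecture is one mindchange. A toy Python sketch, under the simplifying assumption that the class is a finite list rather than a recursively enumerable one:

```python
def identify_by_enumeration(cls, f, samples):
    # cls: an enumerated class of total functions (here a finite list,
    # standing in for a recursively enumerable class).
    # f: the target function; samples: inputs revealed one at a time.
    # The hypothesis is the index of the first function in cls that is
    # consistent with all data seen so far; each revision is a mindchange.
    seen, hypothesis, mindchanges = [], None, 0
    for x in samples:
        seen.append((x, f(x)))
        for i, g in enumerate(cls):
            if all(g(a) == b for a, b in seen):
                if hypothesis is not None and i != hypothesis:
                    mindchanges += 1
                hypothesis = i
                break
    return hypothesis, mindchanges

# Class: step functions g_i(x) = 1 if x < i else 0, numbered i = 0..5.
cls = [(lambda i: (lambda x: 1 if x < i else 0))(i) for i in range(6)]
hyp, mc = identify_by_enumeration(cls, cls[3], range(5))
print(hyp, mc)  # 3 2: two mindchanges before converging on g_3
```

    Reordering the numbering of cls changes the mindchange count for the same target, which is exactly the dependence on the numbering discussed above.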

    Renormalization and computation I: motivation and background


    Master index volumes 181–190


    Language Model Co-occurrence Linking for Interleaved Activity Discovery

    As ubiquitous computing and sensor systems become abundant, the potential for automatic identification and tracking of human behaviours becomes all the more evident. Annotating complex human-behaviour datasets to obtain ground truth for supervised training can, however, be extremely labour-intensive and error-prone. One possible solution to this problem is activity discovery: the identification of activities in an unlabelled dataset by means of an unsupervised algorithm. This paper presents a novel approach to activity discovery that utilises deep-learning-based language production models to construct a hierarchical, tree-like structure over a sequential vector of sensor events. Our approach differs from previous work in that it explicitly aims to deal with interleaving (switching back and forth between activities) in a principled manner, by utilising the long-term memory capabilities of a recurrent neural network cell. We present our approach and test it on a realistic dataset to evaluate its performance. Our results show the viability of the approach and that it holds promise for further investigation. We believe this is a useful direction to consider in accounting for the continually changing nature of behaviours.
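
    As a rough illustration of the kind of model involved (not the authors' architecture; all names and sizes here are invented), here is a minimal PyTorch sketch of a next-sensor-event language model. The LSTM's persistent state is what lets context survive an interleaved switch to another activity and back:

```python
import torch
import torch.nn as nn

class SensorEventLM(nn.Module):
    # Next-event language model over a vocabulary of discrete sensor
    # events; the LSTM's long-term memory carries context across an
    # interleaved switch between activities.
    def __init__(self, vocab_size=32, embed_dim=16, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, events):                # events: (batch, seq_len)
        h, _ = self.lstm(self.embed(events))  # (batch, seq_len, hidden)
        return self.out(h)                    # next-event logits

model = SensorEventLM()
batch = torch.randint(0, 32, (4, 20))         # 4 sequences of 20 events
logits = model(batch[:, :-1])                 # predict event t+1 from event t
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 32), batch[:, 1:].reshape(-1))
loss.backward()
print(loss.item())
```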

    Universal Prediction

    In this thesis I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn from data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question of the possibility of a formal specification of inductive or scientific reasoning, a question that also relates to modern-day speculation about a fully automatized data-driven science. I investigate, in particular, a proposed definition of a universal prediction method that goes back to Solomonoff (1964) and Levin (1970). This definition marks the birth of the theory of Kolmogorov complexity, and has a direct line to the information-theoretic approach in modern machine learning. Solomonoff's work was inspired by Carnap's program of inductive logic, and the more precise definition due to Levin can be seen as an explicit attempt to escape the diagonal argument that Putnam (1963) famously launched against the feasibility of Carnap's program. The Solomonoff-Levin definition essentially aims at a mixture of all possible prediction algorithms. An alternative interpretation is that the definition formalizes the idea that learning from data is equivalent to compressing data. In this guise, the definition is often presented as an implementation, and even as a justification, of Occam's razor, the principle that we should look for simple explanations. The conclusions of my investigation are negative. I show that the Solomonoff-Levin definition fails to unite two conditions that are both necessary for a universal prediction method, as turns out to be entailed by Putnam's original argument after all; and I argue that this indeed shows that no definition can. Moreover, I show that the suggested justification of Occam's razor does not work, and I argue that the relevant notion of simplicity as compressibility is itself already problematic.
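
    In miniature, the mixture idea can be sketched as Bayesian prediction over a finite hypothesis class, with prior weights playing the role of 2^(-description length). The toy Python example below is only a finite stand-in for the Solomonoff-Levin mixture over all prediction algorithms:

```python
# Toy Bayesian mixture predictor over a finite hypothesis class, a
# finite stand-in for the Solomonoff-Levin mixture over all prediction
# algorithms. Each hypothesis gives a probability that the next bit is 1;
# prior weights play the role of 2^(-description length).

def mixture_predict(hypotheses, weights, history):
    # P(next bit = 1 | history) under the weighted mixture.
    total = sum(weights)
    return sum(w * h(history) for h, w in zip(hypotheses, weights)) / total

def bayes_update(hypotheses, weights, history, bit):
    # Reweight each hypothesis by the probability it gave the observed bit.
    return [w * (h(history) if bit == 1 else 1 - h(history))
            for h, w in zip(hypotheses, weights)]

# Hypotheses: "always 1", "always 0", "alternate starting with 1",
# with prior weights 1/2, 1/4, 1/4 ("shorter programs" weigh more).
hypotheses = [lambda hist: 1.0,
              lambda hist: 0.0,
              lambda hist: float(len(hist) % 2 == 0)]
weights = [0.5, 0.25, 0.25]

history = []
for bit in [1, 0, 1, 0, 1, 0]:        # an alternating source
    p = mixture_predict(hypotheses, weights, history)
    print(f"P(1)={p:.3f}, observed {bit}")
    weights = bayes_update(hypotheses, weights, history, bit)
    history.append(bit)
# After two observations only the alternating hypothesis retains weight,
# and the mixture's predictions become exact.
```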

    Universal Prediction

    In this dissertation I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn what there is to learn from data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question of the possibility of a formal specification of inductive or scientific reasoning, a question that also touches on modern-day speculation about a fully automatized data-driven science. I investigate, in particular, a specific mathematical definition of a universal prediction method that goes back to the early days of artificial intelligence and has a direct line to modern developments in machine learning. This definition essentially aims to combine all possible prediction algorithms. An alternative interpretation is that this definition formalizes the idea that learning from data is equivalent to compressing data. In this guise, the definition is often presented as an implementation, and even as a justification, of Occam's razor, the principle that we should look for simple explanations. The conclusions of my investigation are negative. I show that the proposed definition cannot be interpreted as a universal prediction method, as is exposed by a mathematical argument that it was actually intended to overcome. Moreover, I show that the suggested justification of Occam's razor does not work, and I argue that the relevant notion of simplicity as compressibility is itself problematic.

    Universal Prediction: A Philosophical Investigation
