58 research outputs found

    An investigation of a criterion-referenced test using G-theory, and factor and cluster analyses

    Full text link
    There has been relatively little research on analytical procedures for examining the dependability and validity of criterion-referenced tests, especially when compared to similar investigations for norm-referenced ESL or EFL tests. This study used three analytical procedures, namely G-theory, factor analysis, and cluster analysis, to investigate the dependability and validity of a criterion-referenced test developed at the University of California, Los Angeles in 1989. Dependability estimates showed that test scores are not equally dependable for all placement groups and are rather undependable for two of the four placement groups. Factor analysis of test scores for the placement groups showed that although two-factor solutions were the best solutions for the different groups, there were differences in the way the subtests loaded across groups, with progressively fewer subtests loading on the second factor as ability increased. This finding led to an extension study with cluster analysis, which showed that a number of students might have been placed differently if subtest scores had been used to place them.
    Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/68766/2/10.1177_026553229200900104.pd
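
The cluster-analysis step described above can be illustrated in miniature: group examinees by their subtest score profiles and compare the grouping with a placement based on total score alone. The data, the two-subtest setup, and the minimal k-means below are invented for illustration; they are not the study's actual procedure, instruments, or scores.

```python
# Toy sketch: cluster examinees on (listening, reading) subtest
# profiles with a minimal k-means, then note that a total-score
# placement would have treated all of them alike. Everything here
# is hypothetical illustration, not data from the study.

def kmeans(points, k, iters=50):
    """Plain k-means on equal-length score vectors (deterministic init)."""
    centers = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest center (squared distance).
        labels = [
            min(range(k),
                key=lambda c: sum((x - y) ** 2 for x, y in zip(p, centers[c])))
            for p in points
        ]
        # Recompute each center as the mean of its cluster.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return labels

# (listening, reading) scores for six hypothetical examinees;
# every total is about 100, so total-score placement cannot tell
# them apart, but their profiles differ sharply.
scores = [(20, 80), (25, 75), (80, 20), (75, 25), (50, 52), (52, 50)]
labels = kmeans(scores, 2)

# The reading-strong pair clusters apart from the listening-strong pair.
assert labels[0] == labels[1] and labels[2] == labels[3]
assert labels[0] != labels[2]
```

Under these toy numbers, subtest-profile clustering separates examinees that a single total score would place identically, which is the kind of discrepancy the extension study probed.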

    Informal Proofs and Computability

    Get PDF

    Universal Prediction

    Get PDF
    In this thesis I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn from data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question into the possibility of a formal specification of inductive or scientific reasoning, a question that also relates to modern-day speculation about a fully automatized data-driven science. I investigate, in particular, a proposed definition of a universal prediction method that goes back to Solomonoff (1964) and Levin (1970). This definition marks the birth of the theory of Kolmogorov complexity, and has a direct line to the information-theoretic approach in modern machine learning. Solomonoff's work was inspired by Carnap's program of inductive logic, and the more precise definition due to Levin can be seen as an explicit attempt to escape the diagonal argument that Putnam (1963) famously launched against the feasibility of Carnap's program. The Solomonoff-Levin definition essentially aims at a mixture of all possible prediction algorithms. An alternative interpretation is that the definition formalizes the idea that learning from data is equivalent to compressing data. In this guise, the definition is often presented as an implementation and even as a justification of Occam's razor, the principle that we should look for simple explanations. The conclusions of my investigation are negative. I show that the Solomonoff-Levin definition fails to unite two necessary conditions to count as a universal prediction method, as turns out to be entailed by Putnam's original argument after all; and I argue that this indeed shows that no definition can. Moreover, I show that the suggested justification of Occam's razor does not work, and I argue that the relevant notion of simplicity as compressibility is already problematic itself.
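
The "mixture of all possible prediction algorithms" idea can be sketched in miniature as a Bayesian mixture over a finite pool of predictors. The sketch below uses two invented toy experts and an invented data sequence, not the Solomonoff-Levin construction itself; it illustrates the dominance property that motivates mixtures: on any sequence, the mixture's probability is at least its prior weight times that of the best expert in the pool.

```python
# Minimal Bayesian mixture predictor over a finite expert pool,
# a finite stand-in for the mixture idea behind the Solomonoff-Levin
# definition. Experts and data are invented for illustration.

def mixture_prob(experts, weights, sequence):
    """Probability the mixture assigns to `sequence`, predicted bit by bit.

    Each expert maps (history, next_bit) -> probability. After each
    observed bit the weights are updated by Bayes' rule, so the mixture
    tracks whichever expert has predicted best so far.
    """
    total = 1.0
    w = list(weights)
    for t, bit in enumerate(sequence):
        history = sequence[:t]
        preds = [e(history, bit) for e in experts]
        step = sum(wi * p for wi, p in zip(w, preds))  # mixture's next-bit prob
        total *= step
        # Bayesian update: reweight each expert by its likelihood.
        w = [wi * p / step for wi, p in zip(w, preds)]
    return total

# Two toy experts: one biased toward 1s, one that never learns.
biased_to_one = lambda h, b: 0.9 if b == 1 else 0.1
uniform = lambda h, b: 0.5

seq = [1, 1, 1, 1, 1, 1, 1, 1]
p_mix = mixture_prob([biased_to_one, uniform], [0.5, 0.5], seq)
p_best = 0.9 ** len(seq)  # probability the better expert assigns to seq

# Dominance: the mixture loses at most its prior weight (here 1/2)
# relative to the best expert in the pool.
assert p_mix >= 0.5 * p_best
```

By the chain rule, the sequential product of mixture predictions equals the prior-weighted sum of each expert's probability for the whole sequence, which is where the dominance bound comes from.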

    Universal Prediction

    Get PDF
    In this dissertation I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn what there is to learn from data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question into the possibility of a formal specification of inductive or scientific reasoning, a question that also touches on modern-day speculation about a fully automatized data-driven science. I investigate, in particular, a specific mathematical definition of a universal prediction method that goes back to the early days of artificial intelligence and that has a direct line to modern developments in machine learning. This definition essentially aims to combine all possible prediction algorithms. An alternative interpretation is that this definition formalizes the idea that learning from data is equivalent to compressing data. In this guise, the definition is often presented as an implementation and even as a justification of Occam's razor, the principle that we should look for simple explanations. The conclusions of my investigation are negative. I show that the proposed definition cannot be interpreted as a universal prediction method, as turns out to be exposed by a mathematical argument that it was actually intended to overcome. Moreover, I show that the suggested justification of Occam's razor does not work, and I argue that the relevant notion of simplicity as compressibility is problematic itself.
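
The "learning from data is equivalent to compressing data" reading rests on a standard correspondence: a sequential predictor that assigns probability p to each observed symbol can, via arithmetic coding, encode the sequence at about -log2(p) bits per symbol, so better prediction on a sequence means a shorter code for it. The toy predictors below are invented stand-ins used only to make that correspondence concrete.

```python
# Toy illustration of prediction-as-compression: ideal code length
# of a sequence under a sequential predictor is the sum of the
# negative log-probabilities it assigns. Predictors are invented.
from math import log2

def code_length(predict, sequence):
    """Ideal code length in bits of `sequence` under `predict`,
    where predict(history, symbol) returns a probability."""
    return sum(-log2(predict(sequence[:t], s)) for t, s in enumerate(sequence))

uniform = lambda h, s: 0.5                    # learns nothing: 1 bit/symbol
biased = lambda h, s: 0.9 if s == 1 else 0.1  # matches this data source

seq = [1] * 10

# The predictor that fits the data compresses it below the 10 bits
# that uniform (incompressible) coding needs.
assert code_length(uniform, seq) == 10.0
assert code_length(biased, seq) < 10.0
```

On this view, a sequence is "simple" exactly when some predictor in the pool compresses it well, which is the notion of simplicity-as-compressibility that the dissertation goes on to criticize.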

    Universal Prediction: A Philosophical Investigation

    Get PDF

    The Epistemology of Simulation, Computation and Dynamics in Economics: Ennobling Synergies, Enfeebling 'Perfection'

    Get PDF
    Lehtinen and Kuorikoski ([73]) question, provocatively, whether, in the context of Computing the Perfect Model, economists avoid - even positively abhor - reliance on simulation. We disagree with the mildly qualified affirmative answer they give, whilst agreeing with some of the issues they raise. However, there are many economic-theoretic, mathematical (primarily recursion-theoretic and constructive), and even some philosophical and epistemological infelicities in their descriptions, definitions, and analysis. These are pointed out and corrected; for, if not, the issues they raise may be submerged and subverted by an emphasis just on the unfortunate, but essential, errors and misrepresentations.
    Keywords: Simulation, Computation, Computable, Analysis, Dynamics, Proof, Algorithm

    Classical Theorems in Reverse Mathematics and Higher Recursion Theory

    Get PDF
    Ph.D. (Doctor of Philosophy)

    Universal Prediction: A Philosophical Investigation

    Get PDF
    In this dissertation I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn from input data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question into the possibility of a formal specification of inductive or scientific reasoning, a question that touches on modern-day speculation about a fully automatized data-driven science. More specifically, I investigate a particular mathematical definition of a universal prediction method, which was formulated in the early days of artificial intelligence and which has a direct line to modern developments in machine learning. This definition is essentially an attempt to combine all possible prediction algorithms. An alternative interpretation is that this definition formalizes the idea that learning from data is equivalent to compressing that data. In this guise, the definition is also presented as an implementation and even as a justification of Occam's razor, the principle that we should strive for simple explanations. The findings of my investigation are negative. I show that the definition under investigation cannot be interpreted as a universal prediction method, as turns out to follow from a mathematical argument that it was precisely intended to evade. Moreover, I show that the suggested justification of Occam's razor does not hold, and I argue that the relevant notion of simplicity as compressibility is itself problematic.
