    (Psycho-)Analysis of Benchmark Experiments

    It is common knowledge that certain characteristics of data sets -- such as linear separability or sample size -- determine the performance of learning algorithms. In this paper we propose a formal framework for investigating this relationship. The framework combines three methods, each well established in its own scientific discipline. Benchmark experiments are the method of choice in machine and statistical learning to compare algorithms with respect to a certain performance measure on particular data sets. To capture the interaction between data sets and algorithms, the data sets are characterized using statistical and information-theoretic measures, a common approach in the field of meta-learning to decide which algorithms are suited to particular data sets. Finally, the performance ranking of algorithms on groups of data sets with similar characteristics is determined by means of recursively partitioned Bradley-Terry models, which are commonly used in psychology to study the preferences of human subjects. The result is a tree with splits in data set characteristics that significantly change the performances of the algorithms. The main advantage is the automatic detection of these important characteristics. The framework is introduced using a simple artificial example, and its real-world usage is demonstrated by an application example consisting of thirteen well-known data sets and six common learning algorithms. All resources to replicate the examples are available online.
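
    As a rough illustration of the ranking step only, the Python sketch below fits a Bradley-Terry model to pairwise wins derived from benchmark results (lower error on a data set counts as a win). The data and names are made up, it is not the authors' implementation, and it omits the recursive partitioning over data set characteristics that produces the tree.

        import numpy as np

        def bradley_terry(wins, n_iter=500, tol=1e-10):
            """Fit Bradley-Terry worth parameters from a matrix of pairwise wins.

            wins[i, j] = number of times algorithm i beat algorithm j.
            Uses the standard MM update; returns worths normalised to sum to one.
            """
            k = wins.shape[0]
            comparisons = wins + wins.T           # n_ij: comparisons between i and j
            total_wins = wins.sum(axis=1)         # W_i: total wins of algorithm i
            pi = np.ones(k) / k
            for _ in range(n_iter):
                denom = np.zeros(k)
                for i in range(k):
                    for j in range(k):
                        if i != j and comparisons[i, j] > 0:
                            denom[i] += comparisons[i, j] / (pi[i] + pi[j])
                new_pi = total_wins / np.maximum(denom, 1e-12)
                new_pi /= new_pi.sum()
                if np.max(np.abs(new_pi - pi)) < tol:
                    return new_pi
                pi = new_pi
            return pi

        # Made-up benchmark results: errors[d, a] = error of algorithm a on data set d.
        rng = np.random.default_rng(0)
        errors = rng.random((13, 6))              # 13 data sets, 6 algorithms (toy numbers)
        k = errors.shape[1]
        wins = np.zeros((k, k))
        for row in errors:
            for i in range(k):
                for j in range(k):
                    if i != j and row[i] < row[j]:
                        wins[i, j] += 1           # lower error = preferred
        print(np.argsort(-bradley_terry(wins)))   # consensus ranking of the algorithms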

    Case study of a student with an emotional behavioral disorder: an increase in reading fluency and its effect on reading comprehension and behavior in the general education classroom.

    This study investigated the effect an increase in reading fluency had on reading comprehension and on-task behavior in the general education classroom. The participant was an 18-year-old with a cognitive disability and an emotional behavioral disorder. The intervention included repeated reading, vocabulary, partner reading, comprehension questions, and weekly classroom observations. Data collected throughout the course of the intervention included pre- and post-tests, parent and teacher questionnaires, and measurements of reading fluency, comprehension, and on-task behavior in the classroom. Results showed small gains in reading fluency and on-task behavior in the classroom, but no change in reading comprehension. Limitations that may have affected progress include the limited number and length of intervention sessions, the amount of time devoted to classroom observations, and the participant's attendance. Further research is needed on effective literacy interventions to determine the relationship between reading fluency, comprehension, and on-task behavior in the classroom for students with disabilities.

    Symplectic Cuts and Projection Quantization

    The recently proposed projection quantization, which is a method to quantize particular subspaces of systems with known quantum theory, is shown to yield a genuine quantization in several cases. This may be inferred from exact results established within symplectic cutting. (Comment: 12 pages; v2: additional examples and a new reference to related work.)
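
    For readers unfamiliar with the cutting construction, the LaTeX note below recalls the standard symplectic cut (due to Lerman); signs and conventions may differ from those used in the paper.

        % Given a Hamiltonian S^1-action on (M, \omega) with moment map \mu : M \to \mathbb{R},
        % form M \times \mathbb{C} with the product symplectic structure and a circle action
        % rotating the \mathbb{C} factor so that the combined moment map is
        \[
          \Phi(m, z) \;=\; \mu(m) + \tfrac{1}{2}|z|^2 .
        \]
        % The symplectic cut at level \epsilon is the reduced space
        \[
          M_{\leq \epsilon} \;=\; \Phi^{-1}(\epsilon)/S^1
          \;\cong\; \{\mu < \epsilon\} \;\sqcup\; \mu^{-1}(\epsilon)/S^1 ,
        \]
        % an open dense copy of \{\mu < \epsilon\} compactified by the reduced space at level \epsilon.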

    Getting It Right: Employment Subsidy or Minimum Wage?

    In monopsony models of the labour market, either a minimum wage or an employment subsidy financed by a lump-sum tax on profits can achieve the efficient level of employment and output. Incorporating working conditions into a monopsony model where higher wages raise firm labour supply, but less attractive working conditions reduce it, changes these policy implications. Specifically, a minimum wage policy could, in contrast to an employment subsidy, cause working conditions to deteriorate and welfare to fall. Empirical evidence from the Republic of Trinidad and Tobago shows that a minimum wage may indeed cause working conditions to worsen.
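
    The mechanism can be seen in a toy monopsony model. The Python sketch below uses assumed functional forms (quadratic amenity cost, quadratic labour supply in the full wage) that are purely illustrative and are not taken from the paper; it shows the firm cutting working conditions once a binding wage floor is imposed.

        import numpy as np

        # Toy monopsony with endogenous working conditions (illustrative only).
        p, gamma = 2.0, 1.0                       # revenue per worker, worker valuation of amenities
        cost = lambda a: a ** 2                   # per-worker cost of amenity level a
        supply = lambda s: s ** 2                 # labour supply in the "full wage" s = w + gamma * a

        def profit(w, a):
            return (p - w - cost(a)) * supply(w + gamma * a)

        w_grid = np.linspace(0.0, 2.0, 401)
        a_grid = np.linspace(0.0, 1.0, 201)

        def best_response(w_min=0.0):
            """Grid-search the firm's optimal (profit, w, a) subject to w >= w_min."""
            best = (-np.inf, None, None)
            for w in w_grid[w_grid >= w_min]:
                for a in a_grid:
                    pi = profit(w, a)
                    if pi > best[0]:
                        best = (pi, w, a)
            return best

        print(best_response())            # unconstrained monopsony: w = 1.0, amenities a = 0.5
        print(best_response(w_min=1.3))   # binding minimum wage: wage forced up, amenities fall to 0.35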

    Getting it right: Employment subsidy or minimum wage? Evidence from Trinidad and Tobago

    Efficiency Wages and Effort: Are Hard Jobs Better?

    Efficiency wage theory predicts that the wage per unit of effort will be lower in intensively monitored sectors. This wage differential will increase in effort. Using employer-employee matched data from Ghana, we provide evidence supporting this hypothesis.

    Identifying Patient-Specific Root Causes with the Heteroscedastic Noise Model

    Complex diseases are caused by a multitude of factors that may differ between patients even within the same diagnostic category. A few underlying root causes may nevertheless initiate the development of disease within each patient. We therefore focus on identifying patient-specific root causes of disease, which we equate to the sample-specific predictivity of the exogenous error terms in a structural equation model. We generalize from the linear setting to the heteroscedastic noise model where Y = m(X) + εσ(X), with non-linear functions m(X) and σ(X) representing the conditional mean and mean absolute deviation, respectively. This model preserves identifiability but introduces non-trivial challenges that require a customized algorithm called Generalized Root Causal Inference (GRCI) to extract the error terms correctly. GRCI recovers patient-specific root causes more accurately than existing alternatives.
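
    The error-extraction idea under this model can be sketched in a few lines: estimate the conditional mean, estimate the conditional mean absolute deviation from the absolute residuals, and standardise. The Python snippet below is a naive plug-in version with generic regressors, not the GRCI algorithm itself; all names are illustrative.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        def extract_errors(X, y):
            """Recover standardised errors under Y = m(X) + eps * sigma(X).

            Plug-in sketch: fit m(X) by non-linear regression, fit sigma(X) by
            regressing |Y - m_hat(X)| on X, then eps_hat = (Y - m_hat(X)) / sigma_hat(X).
            """
            mean_model = GradientBoostingRegressor().fit(X, y)
            residuals = y - mean_model.predict(X)
            scale_model = GradientBoostingRegressor().fit(X, np.abs(residuals))
            sigma_hat = np.clip(scale_model.predict(X), 1e-6, None)   # guard against division by zero
            return residuals / sigma_hat

        # Toy heteroscedastic data: m(X) = sin(X1), sigma(X) = 0.5 + X2^2.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 3))
        eps = rng.normal(size=2000)
        y = np.sin(X[:, 0]) + eps * (0.5 + X[:, 1] ** 2)
        eps_hat = extract_errors(X, y)
        print(np.corrcoef(eps, eps_hat)[0, 1])    # rough sanity check: correlation should be high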

    Sample-Specific Root Causal Inference with Latent Variables

    Root causal analysis seeks to identify the set of initial perturbations that induce an unwanted outcome. In prior work, we defined sample-specific root causes of disease using exogenous error terms that predict a diagnosis in a structural equation model. We rigorously quantified predictivity using Shapley values. However, the associated algorithms for inferring root causes assume no latent confounding. We relax this assumption by permitting confounding among the predictors. We then introduce a corresponding procedure called Extract Errors with Latents (EEL) for recovering the error terms up to contamination by vertices on certain paths under the linear non-Gaussian acyclic model. EEL also identifies the smallest sets of dependent errors for fast computation of the Shapley values. The algorithm bypasses the hard problem of estimating the underlying causal graph in both cases. Experiments highlight the superior accuracy and robustness of EEL relative to its predecessors.
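
    For context on the Shapley-value step, the sketch below computes exact Shapley attributions of a fitted model's prediction for a single sample over a small set of (already extracted) error terms, using a simple mean-imputation baseline. The baseline choice and all names are illustrative assumptions; EEL's own error-recovery and error-grouping steps are not reproduced here.

        import numpy as np
        from itertools import combinations
        from math import comb
        from sklearn.linear_model import LogisticRegression

        def shapley_for_sample(model, background, x):
            """Exact Shapley attribution of model.predict_proba for one sample x.

            Features outside the coalition are replaced by their background means
            (a baseline-Shapley sketch; exponential cost, so keep the dimension small).
            """
            d = len(x)
            baseline = background.mean(axis=0)

            def value(subset):
                z = baseline.copy()
                z[list(subset)] = x[list(subset)]
                return model.predict_proba(z.reshape(1, -1))[0, 1]

            phi = np.zeros(d)
            for i in range(d):
                others = [j for j in range(d) if j != i]
                for k in range(d):
                    for S in combinations(others, k):
                        weight = 1.0 / (d * comb(d - 1, k))   # |S|!(d-|S|-1)!/d!
                        phi[i] += weight * (value(S + (i,)) - value(S))
            return phi

        # Toy use: extracted errors E predicting a binary diagnosis D.
        rng = np.random.default_rng(0)
        E = rng.normal(size=(500, 4))
        D = (E[:, 0] + 0.5 * E[:, 2] + 0.3 * rng.normal(size=500) > 0).astype(int)
        clf = LogisticRegression().fit(E, D)
        print(shapley_for_sample(clf, E, E[0]))   # per-error contribution for patient 0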

    Algebroid Yang-Mills Theories

    A framework for constructing new kinds of gauge theories is suggested. Essentially it consists in replacing Lie algebras by Lie or Courant algebroids. Besides presenting novel topological theories defined in arbitrary spacetime dimensions, we show that equipping Lie algebroids E with a fiber metric having sufficiently many E-Killing vectors leads to an astonishingly mild deformation of ordinary Yang-Mills theories: additional fields turn out to carry no propagating modes. Instead they serve as moduli parameters gluing together in part different Yang-Mills theories. This leads to a symmetry enhancement at critical points of these fields, as is also typical for string effective field theories. (Comment: 4 pages; v3: minor rewording of v1, version to appear in Phys. Rev. Lett.)
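
    As background, the LaTeX note below recalls the definition of a Lie algebroid, the structure that here replaces the Lie algebra of an ordinary gauge theory; it states only the standard definition, not any formula specific to the paper.

        % A Lie algebroid (E, [\cdot,\cdot], \rho) over a manifold M is a vector bundle
        % E \to M with a Lie bracket on its sections \Gamma(E) and a bundle map
        % (the anchor) \rho : E \to TM satisfying the Leibniz rule
        \[
          [\psi_1, f\,\psi_2] \;=\; f\,[\psi_1, \psi_2] \;+\; \big(\rho(\psi_1) f\big)\,\psi_2,
          \qquad \psi_1, \psi_2 \in \Gamma(E), \quad f \in C^\infty(M).
        \]
        % A Lie algebra is the special case where M is a point; the tangent bundle TM with
        % \rho = \mathrm{id} is another example. Ordinary Yang-Mills theory is recovered when
        % E is a Lie algebra equipped with an ad-invariant fiber metric.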