To P or not to P: on the evidential nature of P-values and their place in scientific inference
The customary use of P-values in scientific research has been attacked as
being ill-conceived, and the utility of P-values has been derided. This paper
reviews common misconceptions about P-values and their alleged deficits as
indices of experimental evidence and, using an empirical exploration of the
properties of P-values, documents the intimate relationship between P-values
and likelihood functions. It is shown that P-values quantify experimental
evidence not by their numerical value, but through the likelihood functions
that they index. Many arguments against the utility of P-values are refuted and
the conclusion is drawn that P-values are useful indices of experimental
evidence. The widespread use of P-values in scientific research is well
justified by the actual properties of P-values, but those properties need to be
more widely understood. Comment: 31 pages, 9 figures and R code.
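The relationship the abstract describes can be illustrated with a minimal sketch (hypothetical numbers, assuming a normal model with known variance, not the paper's own analysis): the observed sample mean fixes both the two-sided p-value and the entire likelihood function, so the p-value can be read as an index into that curve.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_test_p_value(xbar, mu0, sigma, n):
    """Two-sided p-value for H0: mu = mu0 with known sigma (z-test)."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    return 2.0 * (1.0 - normal_cdf(abs(z)))

def likelihood(mu, xbar, sigma, n):
    """Normal likelihood of mu given the observed sample mean."""
    se = sigma / math.sqrt(n)
    return math.exp(-0.5 * ((xbar - mu) / se) ** 2) / (se * math.sqrt(2.0 * math.pi))

# The same observed mean determines both the p-value and the whole
# likelihood curve; in this sense the p-value indexes the likelihood.
p = z_test_p_value(xbar=0.5, mu0=0.0, sigma=1.0, n=25)
print(f"p = {p:.4f}")
```

Here the likelihood is maximized at the observed mean and falls off around it; reporting the p-value alone, the paper argues, is informative precisely because it pins down this curve.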
Conformal Prediction: a Unified Review of Theory and New Challenges
In this work we provide a review of basic ideas and novel developments about
Conformal Prediction -- an innovative distribution-free, non-parametric
forecasting method, based on minimal assumptions -- that is able to yield, in a very straightforward way, prediction sets that are valid in a statistical sense even in the finite-sample case. The in-depth discussion provided in the
paper covers the theoretical underpinnings of Conformal Prediction, and then
proceeds to list the more advanced developments and adaptations of the original
idea. Comment: arXiv admin note: text overlap with arXiv:0706.3188, arXiv:1604.04173, arXiv:1709.06233, arXiv:1203.5422 by other authors.
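The finite-sample validity the abstract refers to can be shown with a small split-conformal sketch (a hypothetical toy, using a trivial mean predictor in place of a fitted model): nonconformity scores from a held-out calibration set yield a prediction interval with guaranteed marginal coverage under exchangeability.

```python
import math
import random

def split_conformal_interval(y_train, y_cal, alpha=0.1):
    """Split conformal prediction interval around a trivial mean predictor.

    Sketch only: any fitted regression model could replace the mean;
    finite-sample validity needs only exchangeability of the calibration
    and test points.
    """
    pred = sum(y_train) / len(y_train)  # stand-in for a real model's prediction

    # Nonconformity scores: absolute residuals on the held-out calibration set.
    scores = sorted(abs(y - pred) for y in y_cal)

    # Finite-sample-valid quantile: the ceil((n+1)(1-alpha))-th smallest score.
    n = len(scores)
    k = min(math.ceil((n + 1) * (1 - alpha)), n)
    q = scores[k - 1]
    return pred - q, pred + q

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(300)]
lo, hi = split_conformal_interval(data[:100], data[100:200], alpha=0.1)
covered = sum(lo <= y <= hi for y in data[200:])
print(f"interval [{lo:.2f}, {hi:.2f}], empirical coverage {covered}/100")
```

With alpha = 0.1 the interval covers roughly 90% of fresh test points, regardless of how good the underlying predictor is; only the interval width reflects predictor quality.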
Tradeoffs in the inductive inference of nearly minimal size programs
Inductive inference machines are algorithmic devices which attempt to synthesize (in the limit) programs for a function while they examine more and more of the graph of the function. There are many possible criteria of success. We study the inference of nearly minimal size programs. Our principal results imply that nearly minimal size programs can be inferred (in the limit) without loss of inferring power, provided we are willing to tolerate a finite, but not uniformly bounded, number of anomalies in the synthesized programs. On the other hand, there is a severe reduction of inferring power in inferring nearly minimal size programs if the maximum number of anomalies allowed is any uniform constant. We obtain a general characterization of the classes of recursive functions which can be synthesized by inferring nearly minimal size programs with anomalies. We also obtain similar results for Popperian inductive inference machines. The exact tradeoffs between mind change bounds on inductive inference machines and anomalies in synthesized programs are obtained. The techniques of recursive function theory, including the recursion theorem, are employed.
Justifying Inference to the Best Explanation as a Practical Meta-Syllogism on Dialectical Structures
This article discusses how inference to the best explanation (IBE) can be justified as a practical meta-argument. It is, firstly, justified as a *practical* argument insofar as accepting the best explanation as true can be shown to further a specific aim. And because this aim is a discursive one which proponents can rationally pursue in--and relative to--a complex controversy, namely maximising the robustness of one's position, IBE can be conceived, secondly, as a *meta*-argument. My analysis thus bears a certain analogy to Sellars' well-known justification of inductive reasoning (Sellars 1969); it is based on recently developed theories of complex argumentation (Betz 2010, 2011).
Apperceptive patterning: Artefaction, extensional beliefs and cognitive scaffolding
In “Psychopower and Ordinary Madness” my ambition, as it relates to Bernard Stiegler’s recent literature, was twofold: 1) critiquing Stiegler’s work on exosomatization and artefactual posthumanism—or, more specifically, nonhumanism—to problematize approaches to media archaeology that rely upon technical exteriorization; 2) challenging how Stiegler engages with Giuseppe Longo and Francis Bailly’s conception of negative entropy. These efforts were directed by a prevalent techno-cultural qualifier: the rise of Synthetic Intelligence (including neural nets, deep learning, predictive processing and Bayesian models of cognition). This paper continues this project, but first directs a critical analytic lens at the Derridean practice of the ontologization of grammatization from which Stiegler emerges, while also distinguishing how metalanguages operate in relation to object-oriented environmental interaction by way of inferentialism. Stalking continental (Kapp, Simondon, Leroi-Gourhan, etc.) and analytic traditions (e.g., Carnap, Chalmers, Clark, Sutton, Novaes, etc.), we move from artefacts to AI and Predictive Processing so as to link theories related to technicity with philosophy of mind. Simultaneously drawing forth Robert Brandom’s conceptualization of the roles that commitments play in retrospectively reconstructing the social experiences that lead to our endorsement(s) of norms, we complement this account with Reza Negarestani’s deprivatized account of intelligence while analyzing the equipollent role between language and media (both digital and analog).
Reinventing grounded theory: some questions about theory, ground and discovery
Grounded theory’s popularity persists after three decades of broad-ranging critique. In this article three problematic notions are discussed—‘theory,’ ‘ground’ and ‘discovery’—which linger in the continuing use and development of grounded theory procedures. It is argued that, far from providing the epistemic security promised by grounded theory, these notions—embodied in continuing reinventions of grounded theory—constrain and distort qualitative inquiry: what is contrived is not in fact theory in any meaningful sense, ‘ground’ is a misnomer when talking about interpretation, and what ultimately materializes following grounded theory procedures is less like discovery and more akin to invention. The procedures admittedly provide signposts for qualitative inquirers, but educational researchers should be wary, for the significance of interpretation, narrative and reflection can be undermined in the procedures of grounded theory.
Gaming security by obscurity
Shannon sought security against the attacker with unlimited computational
powers: *if an information source conveys some information, then Shannon's
attacker will surely extract that information*. Diffie and Hellman refined
Shannon's attacker model by taking into account the fact that the real
attackers are computationally limited. This idea became one of the greatest new
paradigms in computer science, and led to modern cryptography.
Shannon also sought security against the attacker with unlimited logical and
observational powers, expressed through the maxim that "the enemy knows the
system". This view is still endorsed in cryptography. The popular formulation,
going back to Kerckhoffs, is that "there is no security by obscurity", meaning
that the algorithms cannot be kept obscured from the attacker, and that
security should only rely upon the secret keys. In fact, modern cryptography
goes even further than Shannon or Kerckhoffs in tacitly assuming that *if there
is an algorithm that can break the system, then the attacker will surely find
that algorithm*. The attacker is not viewed as an omnipotent computer any more,
but he is still construed as an omnipotent programmer.
So the Diffie-Hellman step from unlimited to limited computational powers has
not been extended into a step from unlimited to limited logical or programming
powers. Is the assumption that all feasible algorithms will eventually be
discovered and implemented really different from the assumption that everything
that is computable will eventually be computed? The present paper explores some
ways to refine the current models of the attacker, and of the defender, by
taking into account their limited logical and programming powers. If the
adaptive attacker actively queries the system to seek out its vulnerabilities,
can the system gain some security by actively learning attacker's methods, and
adapting to them? Comment: 15 pages, 9 figures, 2 tables; final version appeared in the Proceedings of New Security Paradigms Workshop 2011 (ACM 2011); typos corrected.
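The closing question can be made concrete with a toy simulation (entirely hypothetical, not from the paper: attack names, preference weights and the patch threshold are all made up): an attacker probes with methods from a fixed repertoire, and the defender adaptively hardens against the methods it has observed most often.

```python
import random

random.seed(1)

ATTACKS = ["sql_injection", "xss", "brute_force"]
# Hypothetical attacker preferences over its repertoire of methods.
attacker_prefs = {"sql_injection": 0.6, "xss": 0.3, "brute_force": 0.1}

def sample_attack():
    """Draw an attack method according to the attacker's preferences."""
    r, acc = random.random(), 0.0
    for attack, prob in attacker_prefs.items():
        acc += prob
        if r < acc:
            return attack
    return ATTACKS[-1]

observed = {a: 0 for a in ATTACKS}
patched = set()
breaches = 0
for step in range(500):
    attack = sample_attack()
    observed[attack] += 1
    if attack not in patched:
        breaches += 1
    # Defender adapts: patch any method seen at least 5 times.
    for attack_type, count in observed.items():
        if count >= 5:
            patched.add(attack_type)

print(f"breaches: {breaches} out of 500 probes")
```

Because each method is patched after a handful of observations, the breach count stays bounded rather than growing with the number of probes: the defender gains security precisely by learning the attacker's methods, which is the asymmetry of logical and programming power the paper asks about.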
Systems biology in animal sciences
Systems biology is a rapidly expanding field of research and is applied in a number of biological disciplines. In animal sciences, omics approaches are increasingly used, yielding vast amounts of data, but systems biology approaches to extract understanding from these data of biological processes and animal traits are not yet frequently used. This paper aims to explain what systems biology is and which areas of animal sciences could benefit from systems biology approaches. Systems biology aims to understand whole biological systems working as a unit, rather than investigating their individual components. Therefore, systems biology can be considered a holistic approach, as opposed to reductionism. The recently developed ‘omics’ technologies enable biological sciences to characterize the molecular components of life with ever increasing speed, yielding vast amounts of data. However, biological functions do not follow from the simple addition of the properties of system components, but rather arise from the dynamic interactions of these components. Systems biology combines statistics, bioinformatics and mathematical modeling to integrate and analyze large amounts of data in order to extract a better understanding of the biology from these huge data sets and to predict the behavior of biological systems. A ‘system’ approach and mathematical modeling in biological sciences are not new in themselves, as they were used in biochemistry, physiology and genetics long before the name systems biology was coined. However, the present combination of mass biological data and of computational and modeling tools is unprecedented and truly represents a major paradigm shift in biology. Significant advances have been made using systems biology approaches, especially in the field of bacterial and eukaryotic cells and in human medicine. Similarly, progress is being made with ‘system approaches’ in animal sciences, providing exciting opportunities to predict and modulate animal traits.
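The kind of dynamic modeling the abstract describes can be sketched with a minimal two-variable model (hypothetical, not from the paper: rate constants are made up) of mRNA and protein dynamics, integrated with forward Euler, where steady-state behavior emerges from the interaction of the components rather than from either component alone.

```python
def simulate(k_m=2.0, d_m=0.5, k_p=1.0, d_p=0.2, dt=0.01, steps=5000):
    """Integrate a toy gene-expression model with forward Euler.

    k_m: transcription rate, d_m: mRNA decay rate,
    k_p: translation rate,   d_p: protein decay rate.
    """
    m, p = 0.0, 0.0  # mRNA and protein concentrations
    for _ in range(steps):
        dm = k_m - d_m * m       # transcription minus mRNA decay
        dp = k_p * m - d_p * p   # translation minus protein decay
        m += dm * dt
        p += dp * dt
    return m, p

m, p = simulate()
# Analytic steady state for these parameters: m* = k_m/d_m = 4,
# p* = k_p * m* / d_p = 20; the simulation converges toward both.
print(f"mRNA ~ {m:.2f}, protein ~ {p:.2f}")
```

Even this two-equation toy shows the point the abstract makes: the protein level is not a property of any single component but of the coupled dynamics, and scaling the same modeling strategy to omics-sized networks is what systems biology proposes.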