Universal Probability-Free Conformal Prediction
We construct universal prediction systems in the spirit of Popper's
falsifiability and Kolmogorov complexity and randomness. These prediction
systems do not depend on any statistical assumptions (but under the IID
assumption they dominate, to within the usual accuracy, conformal prediction).
Our constructions give rise to a theory of algorithmic complexity and
randomness of time containing analogues of several notions and results of the
classical theory of Kolmogorov complexity and randomness.
Comment: 27 pages.
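As context for the baseline mentioned in this abstract, the following is a minimal sketch of standard split conformal prediction under the IID (exchangeability) assumption. It is not the paper's universal probability-free construction; the regression model, function name, and parameters are illustrative assumptions.

```python
import numpy as np

def split_conformal_interval(X_train, y_train, X_cal, y_cal, x_new, alpha=0.1):
    """Split conformal prediction interval for a new point, assuming IID data.

    Generic sketch of the conformal baseline the abstract compares against,
    not the universal probability-free prediction system itself.
    """
    # Fit any point predictor on the proper training split (here: least squares).
    coef, *_ = np.linalg.lstsq(
        np.c_[np.ones(len(X_train)), X_train], y_train, rcond=None
    )
    predict = lambda X: np.c_[np.ones(len(X)), X] @ coef

    # Nonconformity scores on the calibration split: absolute residuals.
    scores = np.abs(y_cal - predict(X_cal))

    # Conformal quantile with the finite-sample correction ceil((n+1)(1-alpha))/n.
    n = len(scores)
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))

    y_hat = predict(np.atleast_2d(x_new))[0]
    return y_hat - q, y_hat + q  # ~(1 - alpha) coverage under exchangeability

# Example with simulated IID data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=200)
y = 3 * X + rng.normal(size=200)
lo, hi = split_conformal_interval(X[:100], y[:100], X[100:], y[100:], x_new=0.5)
```

Any point predictor can replace the least-squares fit here; the coverage guarantee depends only on exchangeability of the calibration and test points.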
Causality re-established
Causality never gained the status of a "law" or "principle" in physics. Some
recent literature even popularized the false idea that causality is a notion
that should be banned from theory. Such a misconception relies on an alleged
universality of reversibility of laws of physics, based either on determinism
of classical theory, or on the multiverse interpretation of quantum theory, in
both cases motivated by mere interpretational requirements for realism of the
theory. Here, I will show that a properly defined unambiguous notion of
causality is a theorem of quantum theory, which is also a falsifiable
proposition of the theory. Such causality notion appeared in the literature
within the framework of operational probabilistic theories. It is a genuinely
theoretical notion, corresponding to establishing a definite partial order among
events, in the same way as we do by using the future causal cone in Minkowski
space. The causality notion is logically completely independent of the
misidentified concept of "determinism", and, being a consequence of quantum
theory, is ubiquitous in physics. In addition, as classical theory can be
regarded as a restriction of quantum theory, causality holds also in the
classical case, although the determinism of the theory trivializes it. I then
conclude by arguing that causality naturally establishes an arrow of time. This
implies that the scenario of the "Block Universe" and the connected "Past
Hypothesis" are incompatible with causality, and thus with quantum theory: they
both are doomed to remain mere interpretations and, as such, not falsifiable,
similar to the hypothesis of "super-determinism". This article is part of a
discussion meeting issue "Foundations of quantum mechanics and their impact on
contemporary society".Comment: Presented at the Royal Society of London, on 11/12/ 2017, at the
conference "Foundations of quantum mechanics and their impact on contemporary
society". To appear on Philosophical Transactions of the Royal Society
Ad Hoc Hypotheses and the Monsters within
Science is increasingly becoming automated. Tasks yet to be fully
automated include the conjecturing, modifying, extending, and testing of hypotheses. At present, scientists have an array of methods to help them carry out those tasks. These range from well-articulated, formal, and unexceptional rules to semi-articulated and variously understood rules of thumb and intuitive hunches. If we are to hand over at least some of the aforementioned tasks to machines, we need to
clarify, refine, and make formal, not to mention computable, even the more obscure of the methods scientists successfully employ in their inquiries. The focus of this essay is one such less-than-transparent methodological rule: the rule that ad hoc hypotheses ought to be spurned. The essay begins with a brief examination of some notable conceptions of ad hoc-ness in the philosophical literature. It is pointed out that there is a general problem afflicting most such conceptions, namely that the intuitive judgments that are supposed to motivate them are not universally shared. Instead of getting bogged down in what ad hoc-ness exactly means, I shift the focus of the analysis to one undesirable feature often present in alleged cases of ad hoc-ness. I call this feature the ‘monstrousness’ of a hypothesis. A fully articulated formal account of this feature is presented by specifying what it is about the internal constitution of a hypothesis that makes it monstrous. Using this account, a monstrousness measure is then proposed and somewhat sketchily compared with the minimum description length approach.
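To make the comparison target concrete, here is a toy illustration of the minimum-description-length idea referred to above: a two-part code length that charges a hypothesis both for its own complexity and for its misfit to the data. The polynomial setting, the bit-precision parameter, and the function name are assumptions for illustration only; the essay's monstrousness measure itself is not reproduced here.

```python
import numpy as np

def two_part_code_length(x, y, degree, precision_bits=16):
    """Toy two-part MDL score (in bits) for a polynomial hypothesis.

    Illustrative sketch of the MDL approach mentioned in the abstract,
    not the essay's own monstrousness measure.
    """
    # L(H): cost of stating the hypothesis (its coefficients at fixed precision).
    model_bits = (degree + 1) * precision_bits

    # L(D | H): cost of the data given the hypothesis, via Gaussian residuals.
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    sigma2 = max(np.mean(residuals ** 2), 1e-12)
    data_bits = 0.5 * len(y) * np.log2(2 * np.pi * np.e * sigma2)

    return model_bits + data_bits  # prefer the hypothesis with the smaller total

# An over-parameterized, "monstrous" hypothesis pays for its extra parameters.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2 * x + 1 + rng.normal(scale=0.1, size=50)
print(two_part_code_length(x, y, degree=1))  # simple hypothesis
print(two_part_code_length(x, y, degree=9))  # heavily patched hypothesis
```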
Engineering Good: How Engineering Metaphors Help us to Understand the Moral Life and Change Society
Engineering can learn from ethics, but ethics can also learn from engineering. In this paper, I discuss what engineering metaphors can teach us about practical philosophy. Using metaphors such as calculation, performance, and open source, I articulate two opposing views of morality and politics: one that relies on images related to engineering as science and one that draws on images of engineering practice. I argue that the latter view and its metaphors provide a more adequate way to understand and guide the moral life. Responding to two problems of alienation and taking into account developments such as Fab Lab, I then further explore the implications of this view for engineering and society.
Coincidence between transcriptome analyses on different microarray platforms using a parametric framework
A parametric framework for the analysis of transcriptome data is demonstrated to yield coincident results when applied to data acquired using two different microarray platforms. Discrepancies among transcriptome studies are frequently reported, casting doubt on the reliability of collected data. The inconsistency among observations can be largely attributed to differences among the analytical frameworks employed for data analysis. Existing frameworks normalize data against a standard determined from the data to be analyzed. In the present study, a parametric framework based on a strict model for normalization is applied to data acquired using an in-house printed chip and GeneChip. The framework is based on a common statistical characteristic of microarray data, and each dataset is normalized on the basis of a linear relationship with this model. In the proposed framework, the expressional changes observed and the genes selected are coincident between platforms, achieving superior universality of data compared to other methods.
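The following is a hypothetical sketch of the kind of normalization described above: each array's log-intensities are mapped by a linear relationship onto a fixed parametric (normal) model rather than onto a standard derived from the data themselves. The function name, the model parameters, and the simulated inputs are illustrative assumptions, not details from the paper.

```python
import numpy as np

def normalize_to_model(log_intensity, model_mean=8.0, model_sd=1.5):
    """Linearly map one array's log-intensities onto a fixed parametric model.

    Sketch of the idea in the abstract: rather than normalizing against a
    standard computed from the data, each array is rescaled by a linear
    relationship to a pre-specified normal model (model_mean and model_sd
    are illustrative values, not taken from the paper).
    """
    x = np.asarray(log_intensity, dtype=float)
    a = model_sd / x.std()           # slope of the linear relationship
    b = model_mean - a * x.mean()    # intercept
    return a * x + b

# Two platforms measuring the same samples become directly comparable because
# both are expressed on the scale of the same parametric model (simulated data).
rng = np.random.default_rng(1)
chip_a = normalize_to_model(np.log2(rng.lognormal(6, 1, 1000)))  # in-house chip
chip_b = normalize_to_model(np.log2(rng.lognormal(7, 2, 1000)))  # GeneChip
```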
Realism and Objectivism in Quantum Mechanics
The present study attempts to provide a consistent and coherent account of
what the world could be like, given the conceptual framework and results of
contemporary quantum theory. It is suggested that standard quantum mechanics
can, and indeed should, be understood as a realist theory within its domain of
application. It is pointed out, however, that a viable realist interpretation
of quantum theory requires the abandonment or radical revision of the classical
conception of physical reality and its traditional philosophical
presuppositions. It is argued, in this direction, that the conceptualization of
the nature of reality, as arising out of our most basic physical theory, calls
for a kind of contextual realism. Within the domain of quantum mechanics,
knowledge of 'reality in itself', 'the real such as it truly is' independent of
the way it is contextualized, is impossible in principle. In this connection,
the meaning of objectivity in quantum mechanics is analyzed, whilst the
important question concerning the nature of quantum objects is explored.
Comment: 20 pages. arXiv admin note: substantial text overlap with
arXiv:0811.3696, arXiv:quant-ph/0502099, arXiv:0904.2702, arXiv:0904.2859,
arXiv:0905.013
Towards a realistic interpretation of quantum mechanics providing a model of the physical world
It is argued that a realistic interpretation of quantum mechanics is possible
and useful. Current interpretations, from Copenhagen to many worlds, are
critically revisited. The difficulties for intuitive models of quantum physics
are pointed out and possible solutions proposed. In particular the existence of
discrete states, the quantum jumps, the alleged lack of objective properties,
measurement theory, the probabilistic character of quantum physics, the
wave-particle duality and the Bell inequalities are analyzed. The sketch of a
realistic picture of the quantum world is presented. It rests upon the
assumption that quantum mechanics is a stochastic theory whose randomness derives
from the existence of vacuum fields. They correspond to the vacuum fluctuations
of quantum field theory, but taken as real rather than virtual.
Comment: 43 pages; paper revised throughout and somewhat enlarged; sections on the Bell inequalities and on the sketch of a picture of the quantum world rewritten; new references added.
Neural models that convince: Model hierarchies and other strategies to bridge the gap between behavior and the brain.
Computational modeling of the brain holds great promise as a bridge from brain to behavior. To fulfill this promise, however, it is not enough for models to be 'biologically plausible': models must be structurally accurate. Here, we analyze what this entails for so-called psychobiological models, models that address behavior as well as brain function in some detail. Structural accuracy may be supported by (1) a model's a priori plausibility, which comes from a reliance on evidence-based assumptions, (2) fitting existing data, and (3) the derivation of new predictions. All three sources of support require modelers to be explicit about the ontology of the model, and require the existence of data constraining the modeling. For situations in which such data are only sparsely available, we suggest a new approach. If several models are constructed that together form a hierarchy of models, higher-level models can be constrained by lower-level models, and low-level models can be constrained by behavioral features of the higher-level models. Modeling the same substrate at different levels of representation, as proposed here, thus has benefits that exceed the merits of each model in the hierarchy on its own