Tatiana Ehrenfest-Afanassjewa’s Contributions to Dimensional Analysis
Tatiana Ehrenfest-Afanassjewa was an important physicist, mathematician, and educator in twentieth-century Europe. While some of her work has recently undergone reevaluation, little has been said about her groundbreaking work on dimensional analysis. This reflects, in part, her contemporaries' unfortunate dismissal of her interventions in these foundational debates. In spite of this, her generalized theory of homogeneous equations provides a mathematically sound foundation for dimensional analysis and has found some appreciation and development. What remains is to provide a historical account of Ehrenfest-Afanassjewa's use of the theory of homogeneous functions to ground (and limit) dimensional analysis. I take as a central focus her contributions to a debate on the foundations of dimensional analysis started by the physicist Richard Tolman in 1914, and I go on to suggest an interpretation, based on this earlier context, of the more thoroughgoing intervention she made in 1926, especially her limited rehabilitation of a "theory of similitude" in contradistinction to dimensional analysis. I show that Ehrenfest-Afanassjewa made lasting contributions to the mathematical foundations and methodology of dimensional analysis, to our conception of the relation between constants and laws, and to our understanding of the quantitative nature of physics, contributions that remain of value.
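To fix ideas, the core mathematical notion at stake can be stated as follows (a minimal sketch in standard textbook notation; the symbols and the exact formulation are mine, not Ehrenfest-Afanassjewa's):
\[
  f(\lambda^{a_1} x_1, \lambda^{a_2} x_2, \dots, \lambda^{a_n} x_n) \;=\; \lambda^{k}\, f(x_1, x_2, \dots, x_n) \qquad \text{for all } \lambda > 0.
\]
A law expressed by such a generalized homogeneous function changes only by an overall factor when the quantities x_i are rescaled with exponents a_i, which is what licenses the familiar dimensional-analysis move of rewriting the law in terms of dimensionless combinations of the x_i.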
Cantor's Illusion simplified
This analysis shows that Cantor's diagonal definition in his 1891 paper was not compatible with his horizontal enumeration of the infinite set M. The diagonal sequence was a counterfeit that he used to produce an apparent exclusion of a single sequence, in order to prove that the cardinality of M is greater than the cardinality of the set of integers N.
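For orientation, the construction under dispute is Cantor's 1891 diagonal definition, which in modern notation (the lettering here is illustrative) runs roughly as follows:
\[
  M = \{\, (s_1, s_2, s_3, \dots) : s_k \in \{m, w\} \,\}, \qquad
  d_k = \begin{cases} w & \text{if } E_k(k) = m,\\ m & \text{if } E_k(k) = w, \end{cases}
\]
where E_1, E_2, E_3, \dots is any enumeration of elements of M; the sequence d = (d_1, d_2, \dots) then differs from each E_k at the k-th position. The abstract's claim is that this diagonal definition is not compatible with the horizontal enumeration to which it is applied.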
A No-Go Theorem for psi-ontic Models? No, Surely Not!
In a recent reply to my criticisms (Found Phys 55:5, 2025), Carcassi, Oldofredi and Aidala admitted that their no-go result for psi-ontic models is based on the implicit assumption that all states are equally distinguishable, but insisted that this assumption is part of the psi-ontic models as defined by Harrigan and Spekkens, and thus that their result is still valid. In this note, I refute their argument again.
AI4Science and the Context Distinction
“AI4Science” refers to the use of Artificial Intelligence (AI) in scientific research. As AI systems become more widely used in science, we need guidelines for when such uses are acceptable and when they are not. To that end, I propose that the distinction between the context of discovery and the context of justification, which comes from the philosophy of science, may provide a preliminary but still useful guideline for acceptable uses of AI in science. Given that the AI systems used in scientific research are, for the most part, black boxes, we should use such systems in the context of discovery but not in the context of justification. The former refers to processes of idea generation, which may be unproblematically opaque whether they occur in human brains or in artificial neural networks; the latter refers to the scientific methods by which scientific ideas are tested, confirmed, verified, and justified, and these should be transparent.
Virtual Time and Execution of Algorithms in Static Networks
A concept for the emergence of a time-equivalent property from a static network of interconnected states is shown. This property is referred to as virtual time. For each state, a set of coefficients is defined which locally represents the information embedded in the network's connectivity. Network structures denoted as repellers feature successive splits into a steadily increasing number of quantum states. They convey an equivalence between the calculation of their static connectivity coefficients and virtual particles dynamically propagating within them. Strong indications are provided that static networks are virtual Turing-complete machines for algorithms with finite runtime. This opens up a wide range of possible encodings for said coefficients and motivates further research.
Beyond transparency: computational reliabilism as an externalist epistemology of algorithms
This chapter concerns the epistemology of algorithms. As I intend to approach the topic, this is an issue about epistemic justification. Current approaches to justification emphasize the transparency of algorithms, which entails elucidating their internal mechanisms (such as functions and variables) and demonstrating how (or that) these produce outputs. Thus, the mode of justification through transparency is contingent on what can be shown about the algorithm and, in this sense, is internal to the algorithm. In contrast, I advocate an externalist epistemology of algorithms that I term computational reliabilism (CR). While I have previously introduced and examined CR in the field of computer simulations ([42, 53, 4]), this chapter extends this reliabilist epistemology to encompass a broader spectrum of algorithms utilized in various scientific disciplines, with a particular emphasis on machine learning applications. At its core, CR posits that an algorithm's output is justified if it is produced by a reliable algorithm. A reliable algorithm is one that has been specified, coded, used, and maintained with the aid of reliability indicators. These reliability indicators stem from formal methods, algorithmic metrics, expert competencies, cultures of research, and other scientific endeavors. The primary aim of this chapter is to delineate the foundations of CR, explicate its operational mechanisms, and outline its potential as an externalist epistemology of algorithms.
One-Factor versus Two-Factor Theory of Delusion: Replies to Sullivan-Bissett and Noordhof
I would like to thank Sullivan-Bissett and Noordhof for their stimulating comments on my 2023 paper in Neuroethics. In this reply, I will (1) articulate some deeper disagreements that may underpin our disagreement about the nature of delusion, (2) correct their misrepresentation of my previous arguments as a defence of the two-factor theory in particular, and (3) compare the Maherian one-factor theory with the two-factor theory, showing that the two-factor theory is better supported by the evidence.
Schrödinger, Szilard, and the emergence of the EPR argument
Einstein, Podolsky and Rosen's “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?” (1935) and Schrödinger's “Die gegenwärtige Situation in der Quantenmechanik” (1936) are commonly accepted as the seminal papers for the modern study of quantum mechanical entanglement. However, little has been known about the prehistory of these papers. We have been able to trace the development of both Einstein's and Schrödinger's thought, using Schrödinger's correspondence and especially his extensive research notes. In particular, we found that both received important input from Leo Szilard, who in 1931 proposed a thought experiment that is a direct precursor of the EPR experiment, together with a quantum mechanical state that is essentially identical to the EPR state.
On variable non-dependence of first-order formulas
In this paper, we introduce a concept of non-dependence of variables in formulas. A formula of first-order logic is non-dependent of a variable if the truth value of the formula does not depend on the value of that variable. This non-dependence can be subject to constraints on the values of some of the variables appearing in the formula; these constraints are expressed by another first-order formula. After investigating its basic properties, we apply this concept to simplify convoluted formulas by bringing out and discarding redundant nested quantifiers. Such convoluted formulas typically arise when one uses a translation function to interpret one theory in another.
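One way to make this notion precise (my own rendering for illustration; the paper's official definition may differ in detail) is:
\[
  \varphi(\bar{y}, x) \text{ is non-dependent of } x \text{ under the constraint } \psi(\bar{y}) \iff
  \mathcal{M} \models \psi[\bar{a}] \;\text{implies}\; \big( \mathcal{M} \models \varphi[\bar{a}, b] \Leftrightarrow \mathcal{M} \models \varphi[\bar{a}, b'] \big)
  \text{ for all } \mathcal{M}, \bar{a}, b, b'.
\]
On this reading, wherever the constraint \psi holds, a quantifier over x in front of \varphi is redundant: both \forall x\,\varphi and \exists x\,\varphi are equivalent to \varphi, which is what allows nested quantifiers to be brought out and discarded.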
Philosophical Issues in Medical Imaging
This chapter aims to shed light on the normative questions raised by medical imaging (MI), paving the way for interdisciplinary dialogue and further philosophical exploration. MI comprises noninvasive techniques aimed at visualizing internal human body structures to aid in explanation, diagnosis, and monitoring of health conditions. MI requires interpretation by specialized professionals and is routinely employed across medical disciplines. It is entrenched in clinical guidelines and therapeutic interventions. Moreover, it is a dynamic research field, witnessing ongoing technological advancements. After surveying philosophical issues arising from MI, which are relatively unexplored, the chapter focuses on the epistemology of diagnostic imaging. Specifically, it delves into what constitutes an image as evidence and how radiological procedures generate knowledge. The discussion dissects three facets of the radiological process: image interpretation, radiological reporting, and semantic analysis. Each facet carries distinct epistemic implications, as errors can manifest in various ways, affecting the acquisition of patient-relevant knowledge.