220 research outputs found
Gone Till November: A disagreement in Einstein scholarship
The present paper examines an episode from the historiography of the genesis of general relativity. Einstein rejected a certain theory in the so-called “Zurich notebook” in 1912-13, but he reinstated the same theory for a short period in November 1915. Why did Einstein reject the theory at first, and why did he change his mind later? The Einstein scholars who reconstructed Einstein’s reasoning in the Zurich notebook disagree on how to answer these questions. According to the “majority view”, Einstein was unaware of so-called “coordinate conditions” and relied instead on so-called “coordinate restrictions”. John Norton, on the other hand, claims that Einstein must have had coordinate conditions all along, but committed a different mistake, one he would repeat in the context of the famous “hole argument”. After an account of the two views, and of the reactions by the respective opponents, I probe the two views for weaknesses and try to determine how we might settle the disagreement. Finally, I discuss emerging methodological issues.
Modeling causal structures: Volterra's struggle and Darwin's success
The Lotka-Volterra predator-prey model is a widely known example of model-based science. Here we reexamine Vito Volterra’s and Umberto D’Ancona’s original publications on the model, and in particular their methodological reflections. On this basis we develop several ideas pertaining to the philosophical debate on the scientific practice of modeling. First, we show that Volterra and D’Ancona chose modeling because the problem at hand could not be approached by more direct methods such as causal inference. This suggests a philosophically insightful motivation for choosing the strategy of modeling. Second, we show that the development of the model follows a trajectory from a “how possibly” to a “how actually” model. We discuss how and to what extent Volterra and D’Ancona were able to advance their model along that trajectory. It turns out they were unable to establish that their model was fully applicable to any system. Third, we consider another instance of model-based science: Darwin’s model of the origin and distribution of coral atolls in the Pacific Ocean. Darwin argued more successfully that his model faithfully represents the causal structure of the target system, and hence that it is a “how actually” model.
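For reference, the model under discussion is standardly written as the following pair of coupled differential equations; this is a minimal sketch in modern textbook notation, not Volterra's original presentation:

```latex
% Classical Lotka-Volterra predator-prey model:
% x = prey density, y = predator density, with a, b, c, d > 0.
\begin{align}
  \frac{dx}{dt} &= a x - b x y, \\
  \frac{dy}{dt} &= -c y + d x y.
\end{align}
% The nontrivial equilibrium lies at (x^*, y^*) = (c/d, a/b).
```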
Independence from kinetoplast DNA maintenance and expression is associated with multi-drug resistance in Trypanosoma brucei in vitro
It is well known that several antitrypanosomatid drugs accumulate in the parasite’s mitochondrion, where they often bind to the organellar DNA, the kinetoplast. To what extent this property relates to the mode of action of these compounds has remained largely unquantified. Here we show that single point mutations that remove the dependence of laboratory strains of the sleeping sickness parasite Trypanosoma brucei on a functional kinetoplast result in significant resistance to the diamidine and phenanthridine drug classes.
The Volterra Principle Generalized
Michael Weisberg and Kenneth Reisman argue that the so-called Volterra Principle can be derived from a series of predator-prey models, and that, therefore, the Volterra Principle is a prime example of the practice of robustness analysis. In the present paper, I give new results regarding the Volterra Principle, extending Weisberg’s and Reisman’s work, and I discuss the consequences of these new results for robustness analysis. I argue that we do not end up with multiple, independent models, but rather with one general model. I identify the kind of situation in which this generalization approach may occur, I analyze the generalized Volterra Principle from an explanatory perspective, and I propose that cases in which the generalization approach may not apply are in fact cases of mathematical coincidences.
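For orientation, here is a compact textbook statement of the principle in the basic two-species case (my notation, not Weisberg and Reisman's): with prey x and predators y as in the classical model above, uniform harvesting of both species at rate m shifts the equilibrium as follows.

```latex
% Volterra Principle in the basic predator-prey model: harvesting both
% species at rate m replaces a -> a - m and c -> c + m, so the equilibrium
\begin{equation}
  (x^{*}, y^{*}) = \left(\frac{c}{d},\, \frac{a}{b}\right)
  \;\longmapsto\;
  \left(\frac{c+m}{d},\, \frac{a-m}{b}\right),
\end{equation}
% i.e. the prey equilibrium rises while the predator equilibrium falls.
```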
Authorship and the Politics and Ethics of LLM Watermarks
Recently, watermarking schemes for large language models (LLMs) have been proposed to distinguish text generated by machines from text written by humans. The present paper explores philosophical, political, and ethical ramifications of implementing and using watermarking schemes. A definition of authorship that includes both machines (LLMs) and humans is proposed to serve as a backdrop. It is argued that private watermarks may provide private companies with sweeping rights to determine authorship, which is incompatible with traditional standards of authorship determination. Then, possible ramifications of the so-called entropy dependence of watermarking mechanisms are explored. It is argued that entropy may vary across different, socially salient groups. This could lead to group-dependent rates at which machine-generated text is detected. Specifically, groups more interested in low-entropy text may face the challenge that machine-generated text that is of interest to them is harder to detect.
ML Interpretability: Simple Isn't Easy
The interpretability of ML models is important, but it is not clear what it amounts to. So far, most philosophers have discussed the lack of interpretability of black-box models such as neural networks, and methods such as explainable AI that aim to make these models more transparent. The goal of this paper is to clarify the nature of interpretability by focussing on the other end of the ‘interpretability spectrum’. It examines the reasons why some models, such as linear models and decision trees, are highly interpretable, and how more general models, such as MARS and GAM, retain some degree of interpretability. I find that while there is heterogeneity in how we gain interpretability, what interpretability is in particular cases can be explicated in a clear manner.
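As a minimal sketch of the "highly interpretable" end of the spectrum the paper discusses (synthetic data and scikit-learn are my choices, not the paper's), the following shows how a linear model's coefficients and a shallow decision tree's rules can be read off directly:

```python
# Two classically interpretable model classes: a linear model, read off via
# its coefficients, and a shallow decision tree, read off as explicit rules.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

linear = LinearRegression().fit(X, y)
# Each coefficient states the effect of a one-unit change in that feature.
print("linear coefficients:", dict(zip(["x0", "x1"], linear.coef_.round(2))))

tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
# The printed rules ARE the model: every prediction traces to one path.
print(export_text(tree, feature_names=["x0", "x1"]))
```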
The Silent Hexagon: Explaining Comb Structures
The paper presents and discusses four candidate explanations of the structure and construction of the bees’ honeycomb. So far, philosophers have used one of these four explanations, based on the mathematical Honeycomb Conjecture, while the other three candidate explanations have been ignored. I use the four cases to resolve a dispute between Christopher Pincock (2012) and Alan Baker (2015) about the Honeycomb Conjecture explanation. Finally, I find that the two explanations focusing on the construction mechanism are more promising than those focusing exclusively on the resulting, optimal structure. The main reason for this is that optimal structures do not uniquely determine the relevant optimization leading to the optimal structure.
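For reference, the mathematical result behind the first explanation, in the form proved by Hales (2001), stated here from the standard literature rather than from the paper itself:

```latex
% Honeycomb theorem (Hales 2001): any partition of the plane into regions
% of unit area has average perimeter per cell at least that of the regular
% hexagonal grid, i.e.
\begin{equation}
  \overline{P} \;\ge\; 2\sqrt[4]{12} \;\approx\; 3.722.
\end{equation}
```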
