Non-factive Understanding: A Statement and Defense
In epistemology and philosophy of science, there has been substantial debate about truth's relation to understanding. "Non-factivists" hold that radical departures from the truth are not always barriers to understanding; "quasi-factivists" demur. The most discussed example concerns scientists' use of idealizations in certain derivations of the ideal gas law from statistical mechanics. Yet, these discussions have suffered from confusions about the relevant science, as well as conceptual confusions. Addressing this example, we shall argue that the ideal gas law is best interpreted as favoring non-factivism about understanding, but only after delving a bit deeper into the statistical mechanics that has informed these arguments and stating more precisely what non-factivism entails. Along the way, we indicate where earlier discussions have gone astray, and highlight how a naturalistic approach furnishes more nuanced normative theses about the interaction of rationality, understanding, and epistemic value.
The (Im)possibility of Simple Search-To-Decision Reductions for Approximation Problems
We study the question of when an approximate search optimization problem is harder than the associated decision problem. Specifically, we study a natural and quite general model of black-box search-to-decision reductions, which we call branch-and-bound reductions (in analogy with branch-and-bound algorithms). In this model, an algorithm attempts to minimize (or maximize) a function f : D → ℝ_{≥ 0} by making oracle queries to h_f : Σ → ℝ_{≥ 0} satisfying
min_{x ∈ S} f(x) ≤ h_f(S) ≤ γ · min_{x ∈ S} f(x) (*)
for some γ ≥ 1 and any subset S in some allowed class of subsets Σ of the domain D. (When the goal is to maximize f, h_f instead yields an approximation to the maximal value of f over S.) We show tight upper and lower bounds on the number of queries q needed to find even a γ'-approximate minimizer (or maximizer) for quite large γ' in a number of interesting settings, as follows.
- For arbitrary functions f : {0,1}^n → ℝ_{≥ 0}, where Σ contains all subsets of the domain, we show that no branch-and-bound reduction can achieve γ' ≲ γ^{n/log q}, while a simple greedy approach achieves essentially γ^{n/log q}.
- For a large class of MAX-CSPs, where Σ := {S_w} contains each set of assignments to the variables induced by a partial assignment w, we show that no branch-and-bound reduction can do significantly better than essentially a random guess, even when the oracle h_f guarantees an approximation factor of γ ≲ 1 + √(log(q)/n).
- For the Traveling Salesperson Problem (TSP), where Σ := {S_p} contains each set of tours extending a path p, we show that no branch-and-bound reduction can achieve γ' ≲ (γ-1) n/log q. We also prove a nearly matching upper bound in our model.
These results show an oracle model in which approximate search and decision are strongly separated. (In particular, our result for TSP can be viewed as a negative answer to a question posed by Bellare and Goldwasser (SIAM J. Comput. 1994), though only in an oracle model.) We also note two alternative interpretations of our results. First, if we view h_f as a data structure, then our results unconditionally rule out black-box search-to-decision reductions for certain data structure problems. Second, if we view h_f as an efficiently computable heuristic, then our results show that any reasonably efficient branch-and-bound algorithm requires more guarantees from its heuristic than simply Eq. (*).
Behind our results is a "useless oracle lemma," which allows us to argue that under certain conditions the oracle h_f is "useless," and which might be of independent interest. See also the full version [Alexander Golovnev et al., 2022].
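As a concrete illustration of the model, here is a minimal Python sketch (illustrative, not from the paper) of a greedy branch-and-bound reduction for functions on {0,1}^n: it fixes one bit per step, descending into whichever branch the oracle reports as smaller. With one bit per step the worst-case loss compounds to roughly γ^n over n steps; fixing blocks of about log q bits at a time is what yields the γ^{n/log q} tradeoff.

```python
import itertools
import random

def make_oracle(f, n, gamma, rng=random):
    """Toy oracle h_f: for a partial assignment (a prefix of bits), return a
    value between the true minimum of f over all completions of the prefix
    and gamma times that minimum."""
    def h(prefix):
        k = n - len(prefix)
        best = min(f(prefix + rest) for rest in itertools.product((0, 1), repeat=k))
        return best * rng.uniform(1.0, gamma)
    return h

def greedy_minimize(n, h):
    """Greedy branch-and-bound reduction using 2n oracle queries: fix one
    bit per step, keeping the branch with the smaller reported value."""
    prefix = ()
    for _ in range(n):
        v0, v1 = h(prefix + (0,)), h(prefix + (1,))
        prefix += (0,) if v0 <= v1 else (1,)
    return prefix
```

With an exact oracle (γ = 1) this recovers a true minimizer; for γ > 1 each step may commit to a branch whose true minimum is up to a factor γ worse, which is where the exponential loss in n comes from.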
No transcriptional compensation for extreme gene dosage imbalance in fragmented bacterial endosymbionts of cicadas
Characterization of the proteome of Theobroma cacao beans by nano-UHPLC-ESI MS/MS
Cocoa seed storage proteins play an important role in flavour development as aroma precursors are formed from their degradation during fermentation. Major proteins in the beans of Theobroma cacao are the storage proteins belonging to the vicilin and albumin classes. Although both these classes of proteins have been extensively characterized, there is still limited information on the expression and abundance of other proteins present in cocoa beans. This work is the first attempt to characterize the whole cocoa bean proteome by nanoUHPLC-ESI MS/MS analysis using tryptic digests of cocoa bean protein extracts. The results of this analysis show that a total of 906 proteins could be identified using a species-specific Theobroma cacao database. The majority of the identified proteins were involved with metabolism and energy. Additionally, a significant number of the identified proteins were linked to protein synthesis and processing. Several proteins were also involved with plant response to stress conditions and defence. Albumin and vicilin storage proteins showed the highest intensity values among all detected proteins, although only seven entries were identified as storage proteins. A comparison of MS/MS data searches carried out against larger non-specific databases confirmed that using a species-specific database can increase the number of identified proteins, and at the same time reduce the number of false positives. The results of this work will be useful in developing tools which can allow the comparison of the proteomic profile of cocoa beans from different genotypes and geographic origins. Data are available via ProteomeXchange with identifier PXD005586
Improving the predictions of ML-corrected climate models with novelty detection
While previous works have shown that machine learning (ML) can improve the
prediction accuracy of coarse-grid climate models, these ML-augmented methods
are more vulnerable to irregular inputs than the traditional physics-based
models they rely on. Because ML-predicted corrections feed back into the
climate model's base physics, the ML-corrected model regularly produces
out-of-sample data, which can cause model instability and frequent crashes. This work
shows that adding semi-supervised novelty detection to identify out-of-sample
data and disable the ML-correction accordingly stabilizes simulations and
sharply improves the quality of predictions. We design an augmented climate
model with a one-class support vector machine (OCSVM) novelty detector that
provides better temperature and precipitation forecasts in a year-long
simulation than either a baseline (no-ML) or a standard ML-corrected run. By
improving the accuracy of coarse-grid climate models, this work helps make
accurate climate models accessible to researchers without massive computational
resources.
Comment: Appearing at Tackling Climate Change with Machine Learning Workshop
at NeurIPS 202
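A minimal sketch of the gating idea (with invented stand-ins for the climate state and the corrective model, which in the paper are neural networks coupled to a coarse-grid GCM): train a one-class SVM on the inputs the ML correction saw during training, and at run time apply the correction only when the current state is classified as in-sample.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a small feature vector plays the role of the
# model state, and the "correction" is a placeholder function.
def ml_correction(state):
    return 0.1 * state

# Fit the novelty detector on the ML training inputs; nu bounds the
# fraction of training points flagged as outliers.
X_train = rng.normal(size=(500, 4))
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)

def corrected_step(state):
    """Apply the ML correction only for in-sample states; for novel states,
    fall back to the base physics (here: return the state unchanged)."""
    if detector.predict(state.reshape(1, -1))[0] == 1:
        return state + ml_correction(state)
    return state
```

A state far outside the training distribution, e.g. `np.full(4, 10.0)`, is flagged as novel (`predict` returns -1) and left uncorrected; disabling the correction on such states is what stabilizes the coupled run.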
Lattice Problems Beyond Polynomial Time
We study the complexity of lattice problems in a world where algorithms,
reductions, and protocols can run in superpolynomial time, revisiting four
foundational results: two worst-case to average-case reductions and two
protocols. We also show a novel protocol.
1. We prove that secret-key cryptography exists if
-approximate SVP is hard for -time
algorithms. I.e., we extend to our setting (Micciancio and Regev's improved
version of) Ajtai's celebrated polynomial-time worst-case to average-case
reduction from -approximate SVP to SIS.
2. We prove that public-key cryptography exists if
-approximate SVP is hard for -time
algorithms. This extends to our setting Regev's celebrated polynomial-time
worst-case to average-case reduction from -approximate
SVP to LWE. In fact, Regev's reduction is quantum, but ours is classical,
generalizing Peikert's polynomial-time classical reduction from
-approximate SVP.
3. We show a -time coAM protocol for -approximate
CVP, generalizing the celebrated polynomial-time protocol for -CVP due to Goldreich and Goldwasser. These results show
complexity-theoretic barriers to extending the recent line of fine-grained
hardness results for CVP and SVP to larger approximation factors. (This result
also extends to arbitrary norms.)
4. We show a -time co-non-deterministic protocol for
-approximate SVP, generalizing the (also celebrated!)
polynomial-time protocol for -CVP due to Aharonov and Regev.
5. We give a novel coMA protocol for -approximate CVP with a
-time verifier.
All of the results described above are special cases of more general theorems
that achieve time-approximation factor tradeoffs.
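To give a flavor of the Goldreich-Goldwasser-style protocol mentioned in item 3, here is a toy simulation over the integer lattice Z^2 (purely illustrative: the real protocol works over arbitrary lattices and ties the perturbation radius to the promised distance bound). The verifier perturbs a random point of either the lattice or the shifted coset t + Z^2, and an unbounded prover guesses which. If t is far from the lattice the two distributions are disjoint and the prover always wins; if t is close they overlap and the prover must sometimes err.

```python
import numpy as np

rng = np.random.default_rng(2)

def dist_to_lattice(x):
    """Distance from x to the nearest point of the integer lattice Z^2."""
    return np.linalg.norm(x - np.round(x))

def gg_round(t, radius):
    """One round of the coin game: the verifier samples a uniform point in a
    disk around either 0 or t (a random bit b), and the prover guesses b by
    comparing distances to the two cosets."""
    t = np.asarray(t, dtype=float)
    b = int(rng.integers(2))
    while True:  # rejection-sample a uniform point in the disk
        noise = rng.uniform(-radius, radius, size=2)
        if np.linalg.norm(noise) <= radius:
            break
    x = noise + b * t
    guess = 0 if dist_to_lattice(x) <= dist_to_lattice(x - t) else 1
    return guess == b

def prover_success_rate(t, radius, trials=2000):
    return sum(gg_round(t, radius) for _ in range(trials)) / trials
```

For t = (0.5, 0.5) (distance about 0.71 from Z^2) and radius 0.2 the two disks are disjoint and the prover wins every round, while for t = (0.05, 0) they overlap heavily and the success rate drops toward 1/2.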
Machine-learned climate model corrections from a global storm-resolving model
Due to computational constraints, running global climate models (GCMs) for
many years requires a lower spatial grid resolution ( km) than is
optimal for accurately resolving important physical processes. Such processes
are approximated in GCMs via subgrid parameterizations, which contribute
significantly to the uncertainty in GCM predictions. One approach to improving
the accuracy of a coarse-grid global climate model is to add machine-learned
state-dependent corrections at each simulation timestep, such that the climate
model evolves more like a high-resolution global storm-resolving model (GSRM).
We train neural networks to learn the state-dependent temperature, humidity,
and radiative flux corrections needed to nudge a 200 km coarse-grid climate
model to the evolution of a 3 km fine-grid GSRM. When these corrective ML
models are coupled to a year-long coarse-grid climate simulation, the time-mean
spatial pattern errors are reduced by 6-25% for land surface temperature and
9-25% for land surface precipitation with respect to a no-ML baseline
simulation. The ML-corrected simulations develop other biases in climate and
circulation that differ from, but have comparable amplitude to, the baseline
simulation.
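A one-dimensional toy sketch of the approach (all dynamics are invented stand-ins; the paper trains neural networks on the tendencies needed to nudge a 200 km model toward a 3 km GSRM): record the per-timestep difference between a "fine" and a "coarse" model as training targets, fit a state-dependent correction, and apply it at every step of a coupled run.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dynamics: the "fine" model has a state-dependent term the "coarse"
# model misses, giving the coarse model a systematic error.
def fine_step(x):
    return 0.9 * x + 0.3 * np.sin(x)

def coarse_step(x):
    return 0.9 * x

# Training data: the correction needed at each state is the tendency
# difference between the two models.
states = rng.uniform(-3, 3, size=200)
targets = fine_step(states) - coarse_step(states)

# Fit a simple least-squares model in a small basis (the paper uses
# neural networks for this state-dependent correction).
A = np.column_stack([np.sin(states), states, np.ones_like(states)])
coef, *_ = np.linalg.lstsq(A, targets, rcond=None)

def learned_correction(x):
    return coef @ np.array([np.sin(x), x, 1.0])

# Coupled runs: baseline (no correction) vs. ML-corrected.
x_true = x_base = x_ml = 1.5
for _ in range(50):
    x_true = fine_step(x_true)
    x_base = coarse_step(x_base)
    x_ml = coarse_step(x_ml) + learned_correction(x_ml)
```

Because the basis can represent the missing term exactly, the corrected run tracks the fine model almost perfectly here; in the real setting the learned correction is imperfect, which is why the ML-corrected simulations can develop biases of their own.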