169 research outputs found

    Non-factive Understanding: A Statement and Defense

    Get PDF
    In epistemology and philosophy of science, there has been substantial debate about truth’s relation to understanding. “Non-factivists” hold that radical departures from the truth are not always barriers to understanding; “quasi-factivists” demur. The most discussed example concerns scientists’ use of idealizations in certain derivations of the ideal gas law from statistical mechanics. Yet, these discussions have suffered from confusions about the relevant science, as well as conceptual confusions. Addressing this example, we shall argue that the ideal gas law is best interpreted as favoring non-factivism about understanding, but only after delving a bit deeper into the statistical mechanics that has informed these arguments and stating more precisely what non-factivism entails. Along the way, we indicate where earlier discussions have gone astray, and highlight how a naturalistic approach furnishes more nuanced normative theses about the interaction of rationality, understanding, and epistemic value.
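
    As a quick reminder of the idealization at issue (a standard textbook fact, not a claim drawn from the paper): the statistical-mechanical derivation treats the gas as non-interacting point particles, and under that idealization one obtains the ideal gas law, which real gases satisfy only approximately.

        % Ideal gas law, as derived under the idealizing assumptions of
        % non-interacting point particles (kinetic theory / statistical mechanics).
        % N = number of molecules, k_B = Boltzmann constant, n = moles, R = gas constant.
        PV = N k_B T \qquad\text{equivalently}\qquad PV = nRT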

    The (Im)possibility of Simple Search-To-Decision Reductions for Approximation Problems

    Get PDF
    We study the question of when an approximate search optimization problem is harder than the associated decision problem. Specifically, we study a natural and quite general model of black-box search-to-decision reductions, which we call branch-and-bound reductions (in analogy with branch-and-bound algorithms). In this model, an algorithm attempts to minimize (or maximize) a function f : D → ℝ_{≥ 0} by making oracle queries to h_f : 𝒮 → ℝ_{≥ 0} satisfying min_{x ∈ S} f(x) ≤ h_f(S) ≤ γ · min_{x ∈ S} f(x) (*) for some γ ≥ 1 and any subset S in some allowed class of subsets 𝒮 of the domain D. (When the goal is to maximize f, h_f instead yields an approximation to the maximal value of f over S.) We show tight upper and lower bounds on the number of queries q needed to find even a γ′-approximate minimizer (or maximizer) for quite large γ′ in a number of interesting settings, as follows. - For arbitrary functions f : {0,1}^n → ℝ_{≥ 0}, where 𝒮 contains all subsets of the domain, we show that no branch-and-bound reduction can achieve γ′ ≲ γ^{n/log q}, while a simple greedy approach achieves essentially γ^{n/log q}. - For a large class of MAX-CSPs, where 𝒮 := {S_w} contains each set of assignments to the variables induced by a partial assignment w, we show that no branch-and-bound reduction can do significantly better than essentially a random guess, even when the oracle h_f guarantees an approximation factor of γ ≲ 1 + √(log(q)/n). - For the Traveling Salesperson Problem (TSP), where 𝒮 := {S_p} contains each set of tours extending a path p, we show that no branch-and-bound reduction can achieve γ′ ≲ (γ − 1)·n/log q. We also prove a nearly matching upper bound in our model. These results show an oracle model in which approximate search and decision are strongly separated. (In particular, our result for TSP can be viewed as a negative answer to a question posed by Bellare and Goldwasser (SIAM J. Comput. 1994), though only in an oracle model.) We also note two alternative interpretations of our results. First, if we view h_f as a data structure, then our results unconditionally rule out black-box search-to-decision reductions for certain data structure problems. Second, if we view h_f as an efficiently computable heuristic, then our results show that any reasonably efficient branch-and-bound algorithm requires more guarantees from its heuristic than simply Eq. (*). Behind our results is a "useless oracle lemma," which allows us to argue that under certain conditions the oracle h_f is "useless," and which might be of independent interest. See also the full version [Alexander Golovnev et al., 2022].
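
    To make the query model concrete, here is a minimal, self-contained Python sketch (an illustration under assumed names, not the paper's construction): a toy oracle h_f satisfying Eq. (*) on subcubes of {0,1}^n induced by partial assignments, together with a greedy branch-and-bound strategy that fixes one variable per step by descending into the subcube whose reported value is smaller.

        import itertools
        import random

        def make_oracle(f, n, gamma):
            # Toy h_f satisfying Eq. (*): min_{x in S} f(x) <= h_f(S) <= gamma * min_{x in S} f(x),
            # where S is the subcube of {0,1}^n consistent with a partial assignment
            # (a dict mapping variable index -> fixed bit). Brute force; illustration only.
            def h_f(partial):
                free = [i for i in range(n) if i not in partial]
                best = float("inf")
                for bits in itertools.product([0, 1], repeat=len(free)):
                    x = [partial.get(i, 0) for i in range(n)]
                    for i, b in zip(free, bits):
                        x[i] = b
                    best = min(best, f(tuple(x)))
                return best * random.uniform(1.0, gamma)  # any value in this range satisfies (*)
            return h_f

        def greedy_branch_and_bound(h_f, n):
            # Greedy reduction: 2n oracle queries, always branching into the
            # subcube whose (approximate) minimum reported by h_f is smaller.
            partial = {}
            for i in range(n):
                partial[i] = 0 if h_f({**partial, i: 0}) <= h_f({**partial, i: 1}) else 1
            return tuple(partial[i] for i in range(n))

        n, gamma = 8, 1.5
        f = lambda x: 1 + sum(x)  # toy objective on {0,1}^n; the true minimizer is all zeros
        x_hat = greedy_branch_and_bound(make_oracle(f, n, gamma), n)
        print(x_hat, f(x_hat))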

    Improving the predictions of ML-corrected climate models with novelty detection

    Full text link
    While previous works have shown that machine learning (ML) can improve the prediction accuracy of coarse-grid climate models, these ML-augmented methods are more vulnerable to irregular inputs than the traditional physics-based models they rely on. Because ML-predicted corrections feed back into the climate model's base physics, the ML-corrected model regularly produces out-of-sample data, which can cause model instability and frequent crashes. This work shows that adding semi-supervised novelty detection to identify out-of-sample data and disable the ML correction accordingly stabilizes simulations and sharply improves the quality of predictions. We design an augmented climate model with a one-class support vector machine (OCSVM) novelty detector that provides better temperature and precipitation forecasts in a year-long simulation than either a baseline (no-ML) or a standard ML-corrected run. By improving the accuracy of coarse-grid climate models, this work helps make accurate climate models accessible to researchers without massive computational resources. Comment: Appearing at Tackling Climate Change with Machine Learning Workshop at NeurIPS 202
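
    As a concrete illustration of the gating idea, here is a minimal sketch using scikit-learn's OneClassSVM (the feature dimensions, thresholds, and function names are illustrative assumptions, not the paper's configuration): fit the detector on state vectors drawn from the ML training distribution, then zero out the learned correction wherever the current state is flagged as novel.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(5000, 8))      # stand-in for training-set model-state vectors
        scaler = StandardScaler().fit(X_train)
        detector = OneClassSVM(nu=0.01, kernel="rbf", gamma="scale").fit(scaler.transform(X_train))

        def corrected_tendency(state, base_tendency, ml_correction):
            # Apply the ML correction only where the detector labels the state an inlier (+1);
            # out-of-sample columns fall back to the base physics alone.
            is_inlier = detector.predict(scaler.transform(state)) == 1
            return base_tendency + np.where(is_inlier[:, None], ml_correction, 0.0)

        # Toy batch: three in-distribution columns and two strongly shifted (novel) ones.
        state = np.vstack([rng.normal(size=(3, 8)), rng.normal(loc=6.0, size=(2, 8))])
        base = np.zeros((5, 4))                   # stand-in physics tendencies
        ml = np.full((5, 4), 0.1)                 # stand-in ML-predicted corrections
        print(corrected_tendency(state, base, ml))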

    Lattice Problems Beyond Polynomial Time

    Full text link
    We study the complexity of lattice problems in a world where algorithms, reductions, and protocols can run in superpolynomial time, revisiting four foundational results: two worst-case to average-case reductions and two protocols. We also show a novel protocol. 1. We prove that secret-key cryptography exists if Õ(√n)-approximate SVP is hard for 2^{εn}-time algorithms. I.e., we extend to our setting (Micciancio and Regev's improved version of) Ajtai's celebrated polynomial-time worst-case to average-case reduction from Õ(n)-approximate SVP to SIS. 2. We prove that public-key cryptography exists if Õ(n)-approximate SVP is hard for 2^{εn}-time algorithms. This extends to our setting Regev's celebrated polynomial-time worst-case to average-case reduction from Õ(n^{1.5})-approximate SVP to LWE. In fact, Regev's reduction is quantum, but ours is classical, generalizing Peikert's polynomial-time classical reduction from Õ(n²)-approximate SVP. 3. We show a 2^{εn}-time coAM protocol for O(1)-approximate CVP, generalizing the celebrated polynomial-time protocol for O(√(n/log n))-CVP due to Goldreich and Goldwasser. These results show complexity-theoretic barriers to extending the recent line of fine-grained hardness results for CVP and SVP to larger approximation factors. (This result also extends to arbitrary norms.) 4. We show a 2^{εn}-time co-non-deterministic protocol for O(√(log n))-approximate SVP, generalizing the (also celebrated!) polynomial-time protocol for O(√n)-CVP due to Aharonov and Regev. 5. We give a novel coMA protocol for O(1)-approximate CVP with a 2^{εn}-time verifier. All of the results described above are special cases of more general theorems that achieve time-approximation factor tradeoffs.

    Machine-learned climate model corrections from a global storm-resolving model

    Full text link
    Due to computational constraints, running global climate models (GCMs) for many years requires a lower spatial grid resolution (≳50 km) than is optimal for accurately resolving important physical processes. Such processes are approximated in GCMs via subgrid parameterizations, which contribute significantly to the uncertainty in GCM predictions. One approach to improving the accuracy of a coarse-grid global climate model is to add machine-learned state-dependent corrections at each simulation timestep, such that the climate model evolves more like a high-resolution global storm-resolving model (GSRM). We train neural networks to learn the state-dependent temperature, humidity, and radiative flux corrections needed to nudge a 200 km coarse-grid climate model to the evolution of a 3 km fine-grid GSRM. When these corrective ML models are coupled to a year-long coarse-grid climate simulation, the time-mean spatial pattern errors are reduced by 6-25% for land surface temperature and 9-25% for land surface precipitation with respect to a no-ML baseline simulation. The ML-corrected simulations develop other biases in climate and circulation that differ from, but have comparable amplitude to, the baseline simulation.
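
    A minimal sketch of the training-target construction and the coupled update (illustrative assumptions throughout: the nudging timescale, state dimensions, and toy physics are not the paper's configuration): the correction is learned as the nudging tendency that pulls the coarse state toward the reference fine-grid state, and it is added to the base-physics tendency at each timestep.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        tau, dt = 3 * 3600.0, 900.0               # assumed nudging timescale and timestep (s)
        coarse = rng.normal(size=(2000, 10))      # stand-in coarse-grid state columns
        fine = coarse + 0.3 * np.tanh(coarse)     # stand-in reference (fine-grid) states
        target = (fine - coarse) / tau            # nudging tendency used as the ML target

        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
        model.fit(coarse, target)

        def step(state):
            # One coupled timestep: base-physics tendency plus the learned correction.
            base_tendency = -0.01 * state         # toy stand-in for the coarse model's physics
            return state + dt * (base_tendency + model.predict(state))

        print(step(rng.normal(size=(5, 10))).shape)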