Evaluating strong longtermism
Roughly, strong longtermism (Greaves and MacAskill 2021) is the view that in the most important decision-situations facing us today, the option that is ex ante best, and the one we ought to choose, is the option that makes the far future go best. The purpose of this thesis is to evaluate strong longtermism. I do this by first considering what I take to be three important objections to this view, and then suggesting a way in which the strong longtermist may be able to respond to them.
The thesis consists of five chapters. In Chapter 1, I introduce the topic of the thesis and reconstruct Greaves and MacAskill’s argument for strong longtermism. In Chapter 2, I argue that partially aggregative and non-aggregative moral views form a significant objection to Greaves and MacAskill’s argument for deontic strong longtermism. In Chapter 3, I discuss the procreative asymmetry, arguing that what I call the Purely Deontic Asymmetry forms another important objection to strong longtermism. In Chapter 4, I consider the problem of fanaticism, arguing that the best way those in favour of strong longtermism can avoid this problem is by adopting a view called tail discounting. Finally, in Chapter 5, I propose that the issues discussed in the preceding chapters can be satisfactorily dealt with by framing strong longtermism as a public philosophy. This means that we should understand strong longtermism as a view that correctly describes what state-level actors ought to do, rather than as a blueprint for individual morality.
If my evaluation is correct, then there are important limits to the role that strong longtermism can play in our private lives. However, it also implies that, as a society, we ought to do much more than we currently do to safeguard the long-term future of humanity.
Exceeding Expectations: Stochastic Dominance as a General Decision Theory
The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal's Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one's options, many expectation-maximizing gambles that do not stochastically dominate their alternatives "in a vacuum" become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will not require agents to accept options whose expectational superiority depends on sufficiently small probabilities of extreme payoffs. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.
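For readers who want the formal notion the paper relies on, first-order stochastic dominance can be stated as follows (a minimal statement in my own notation, not the paper's):
\[
A \succ_{\mathrm{SD}} B \iff \Pr\big(\mathrm{CW}(A) \geq x\big) \;\geq\; \Pr\big(\mathrm{CW}(B) \geq x\big) \text{ for every threshold } x,
\]
with strict inequality for at least one $x$, where $\mathrm{CW}(\cdot)$ denotes the uncertain choiceworthiness of an option. The paper's central claim is that adding sufficient background uncertainty to $\mathrm{CW}(A)$ and $\mathrm{CW}(B)$ can turn an expectation-maximizing option that does not dominate "in a vacuum" into a stochastically dominant one.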
The Freedom of Future People
What happens to liberal political philosophy if we consider not only the freedom of present people but also that of future people? In this article, I explore the case for long-term liberalism: freedom should be a central goal, and we should often be particularly concerned with effects on long-term future distributions of freedom. I provide three arguments. First, liberals should be long-term liberals: liberal arguments to value freedom give us reason to be (particularly) concerned with future freedom, including freedom in the far future. Second, longtermists should be liberals, particularly under conditions of empirical and moral uncertainty. Third, long-term liberalism plausibly justifies some restrictions on the freedom of existing people to secure the freedom of future people, for example when mitigating climate change. At the same time, it likely avoids excessive trade-offs: for both empirical and philosophical reasons, long-term and near-term freedom show significant convergence. Throughout, I also highlight important practical implications, for example for longtermist institutional action, climate change, human extinction, demography, and global catastrophic risks.
High risk, low reward: a challenge to the astronomical value of existential risk mitigation
Many philosophers defend two claims: the astronomical value thesis that it is astronomically important to mitigate existential risks to humanity, and existential risk pessimism, the claim that humanity faces high levels of existential risk. It is natural to think that existential risk pessimism supports the astronomical value thesis. In this paper, I argue that precisely the opposite is true. Across a range of assumptions, existential risk pessimism significantly reduces the value of existential risk mitigation, so much so that pessimism threatens to falsify the astronomical value thesis. I argue that the best way to reconcile existential risk pessimism with the astronomical value thesis relies on a questionable empirical assumption. I conclude by drawing out philosophical implications of this discussion, including a transformed understanding of the demandingness objection to consequentialism, reduced prospects for ethical longtermism, and a diminished moral importance of existential risk mitigation.
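To make the shape of the argument concrete, here is a toy calculation of my own, not necessarily the model used in the paper: suppose humanity faces a constant per-century extinction risk $r$ and each century survived is worth $v$. Then the expected value of the future is
\[
\sum_{t=1}^{\infty} v\,(1-r)^{t} \;=\; \frac{v\,(1-r)}{r},
\]
and reducing this century's risk by an absolute amount $\Delta$ raises expected value by $\Delta v / r$. On these assumptions, the more pessimistic we are (the larger $r$), the less a given absolute risk reduction is worth, and even eliminating this century's risk entirely ($\Delta = r$) gains only $v$, the value of a single century rather than an astronomical quantity.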
Deep Time and Microtime: Anthropocene Temporalities and Silicon Valley's Longtermist Scope
Living in Anthropocene times entails living in relation to two seemingly separate temporalities – the microtime of digital operations and the deep time of geological upheaval. Though divergent, these temporalities are united by their unavailability to perception; microtime proceeds too fast to perceive directly, while deep time is too vast to apprehend. Taking these temporalities as a point of departure, this paper develops three arguments. First, it asserts that the temporalities of deep time and microtime increasingly impact contemporary existence, complicating familiar categorizations of temporal experience. Second, it argues that these ostensibly separate temporalities are ontologically connected through the operations of the tech industry, which is constructing a microtemporal system that extracts the planet’s deep time resources to delimit the future both materially and cognitively. Third, it suggests that Silicon Valley legitimizes these processes by funding the philosophy of longtermism, which appeals to distant timescales to marginalize injustices in the present.
Don't slip into binary thinking about AI
In discussions about the development and governance of AI, a false binary is often drawn between two groups: those most concerned about the existing, social impacts of AI, and those most concerned about possible future risks of powerful AI systems taking actions that don't align with human interests. In this piece, we (i) describe the emergence of this false binary, (ii) explain why the seemingly clean distinctions drawn between these two groups don't hold up under scrutiny, and (iii) highlight efforts to bridge this divide.
A Problem Best Put Off Until Tomorrow
Effective Altruism has led a recent renaissance for utilitarian theory. However, it seems that despite its surge in popularity, Effective Altruism is still vulnerable to many of the critiques that plague utilitarianism. The most significant amongst these is the utility monster. I use Longtermism, a mode of thinking that has evolved from Effective Altruism and that prioritizes the far future over the present in decision-making, as an example of how the unborn millions of the future might constitute a utility monster as a corporate mass. I investigate three main avenues for resolving the utility monster objection to Effective Altruism: reconsidering the use of expected value, adopting temporal discounting, and adopting average utilitarianism. I demonstrate that at best there are significant problems with these responses and at worst they completely fail to resolve the utility monster objection. I then conclude that if situations do exist in which the costs to the present do not intuitively justify the benefits to the far future, we must reject utilitarianism altogether.
Climate Adaptation in Norway
This thesis explores governmental climate adaptation practices in Norway. The degree to which Norway’s climate adaptation practices incorporate resilience theory is central to the discussion. Universal challenges that hinder effective climate adaptation are also examined. The analysis also considers the ways in which these challenges create barriers to realizing longtermist ideals in climate adaptation.
