Defining 'Speech': Subtraction, Addition, and Division
In free speech theory 'speech' has to be defined as a special term of art. I argue that much free speech discourse comes with a tacit commitment to a 'Subtractive Approach' to defining speech. As an initial default, all communicative acts are assumed to qualify as speech, before exceptions are made to 'subtract' those acts that don't warrant the special legal protections owed to 'speech'. I examine how different versions of the Subtractive Approach operate, and criticise their limited ability to yield a substantive definition of speech which covers all and only those forms of communicative action that – so our arguments for free speech indicate – really do merit special legal protection. In exploring alternative definitional approaches, I argue that what ultimately compromises definitional adequacy in this arena is a theoretical commitment to the significance of a single unified class of privileged communicative acts. I then propose an approach to free speech theory that eschews this theoretical commitment.
Super Soldiers and Technological Asymmetry
In this chapter I argue that emerging soldier enhancement technologies have the potential to transform the ethical character of the relationship between combatants in conflicts between 'Superpower' militaries, which can deploy such technologies, and technologically disadvantaged 'Underdog' militaries. The reasons for this relate to Paul Kahn's claims about the paradox of riskless warfare. When an Underdog poses no threat to a Superpower, the standard just war theoretic justifications for the Superpower's combatants using lethal violence against their opponents break down. Therefore, Kahn argues, combatants in that position must approach their opponents in an ethical guise relevantly similar to 'policing'. I argue that the kinds of disparities in risk and threat between opposing combatants that Kahn's analysis posits don't obtain in the context of face-to-face combat, in the way they would need to in order to support his ethical conclusions about policing. But I then argue that soldier enhancement technologies have the potential to change this, in a way that reactivates the force of those conclusions.
Moral Renegades
This piece is a side-by-side review of two books: Strangers Drowning, by Larissa MacFarquhar, and Doing Good Better, by William MacAskill. Both books are concerned with the question of whether we should try to live as morally good a life as possible. MacAskill thinks the answer is 'yes', and his book is an overview of how the Effective Altruist movement approaches the problem of how to achieve a morally optimal life. MacFarquhar's book is a more descriptive account of the lives of people who aim to live in a morally optimal way. Her discussion is nuanced, and somewhat ambivalent about the merits of this aim. My review brings out some commonalities and differences between the two books, and critically digests the arguments on offer.
Dehumanization: its Operations and its Origins
Gail Murrow and Richard Murrow offer a novel account of dehumanization, synthesizing data which suggest that where subject S has a dehumanized view of group G, S's neural mechanisms of empathy show a dampened response to the suffering of members of G, and S's judgments about the humanity of members of G are largely non-conscious. Here I examine Murrow and Murrow's suggestions about how identity-based hate speech bears responsibility for dehumanization in the first place. I identify a distinction between (i) accounts of the nature of the harm effected by identity prejudice, and (ii) accounts of how hate speech contributes to the harms of identity prejudice. I then explain why Murrow and Murrow's proposal is more aptly construed as an account of type (i), and why accounts of this type, even if they're plausible and evidentially well-supported, have limited implications for justifications of anti-hate speech law.
No Platforming
This paper explains how the practice of 'no platforming' can be reconciled with a liberal politics. While opponents say that no platforming flouts ideals of open public discourse, and defenders see it as a justifiable harm-prevention measure, both sides mistakenly treat the debate like a run-of-the-mill free speech conflict, rather than an issue of academic freedom specifically. Content-based restrictions on speech in universities are ubiquitous. And this is no affront to a liberal conception of academic freedom, whose purpose isn't just to protect the speech of academics, but also to give them the prerogative to determine which views and speakers have sufficient disciplinary credentials to receive a hearing in academic contexts. No platforming should therefore be acceptable to liberals, in principle, in cases where it is used to support a university culture that maintains rigorous disciplinary standards, by denying attention and credibility to speakers without appropriate disciplinary credentials.
Tolerating Hate in the Name of Democracy
This article offers a comprehensive and critical analysis of Eric Heinze's book Hate Speech and Democratic Citizenship (Oxford University Press, 2016). Heinze's project is to formulate and defend a more theoretically complex version of the idea (also defended by people like Ronald Dworkin and James Weinstein) that general legal prohibitions on hate speech in public discourse compromise the state's democratic legitimacy. We offer a detailed synopsis of Heinze's view, highlighting some of its distinctive qualities and strengths. We then develop a critical response to this view with three main focal points: (1) the characterisation of democratic legitimacy as something distinct from (and whose demands aren't identical with those of) legitimacy per se; (2) the claim that the requirements of democracy are hypothetical, rather than categorical, imperatives; and relatedly (3) the question of how we should reconcile the requirements of democratic legitimacy with the costs that may follow from prioritising democratic legitimacy. We argue that there are significant difficulties for Heinze's account on all three fronts.
Thermal-infrared imaging of 3C radio galaxies at z~1
We present the results of a programme of thermal-IR imaging of nineteen z~1 radio galaxies from the 3CR and 3CRR samples. We detect emission at L' (3.8um) from four objects; in each case the emission is unresolved at 1" resolution. Fifteen radio galaxies remain undetected to sensitive limits of L'~15.5. Using these data in tandem with archived HST data and near-IR spectroscopy, we show that three of the detected 'radio galaxies' (3C22, 3C41, and 3C65) harbour quasars reddened by Av<5. Correcting for this reddening, 3C22 and 3C41 are very similar to coeval 3C quasars, whilst 3C65 seems unusually underluminous. The fourth radio galaxy detection (3C265) is a more highly obscured (Av~15) but otherwise typical quasar, which has previously been evident only in scattered light. We determine the fraction of dust-reddened quasars at z~1 to be 28(+25)(-13)% at 90% confidence. On the assumption that the undetected radio galaxies harbour quasars similar to those in 3C22, 3C41, and 3C265 (as seems reasonable given their similar narrow emission line luminosities), we deduce extinctions of Av>15 towards their nuclei. The contributions of reddened quasar nuclei to the total K-band light range from ~0 per cent for the non-detections, through ~10 per cent for 3C265, to ~80 per cent for 3C22 and 3C41. Correcting for these effects does not remove the previously reported differences between the K magnitudes of 3C and 6C radio galaxies, so contamination by reddened quasar nuclei is not a serious problem for drawing cosmological conclusions from the K-z relation for radio galaxies. We discuss these results in the context of the 'receding torus' model, which predicts a small fraction of lightly-reddened quasars in samples of high radio luminosity sources. We also examine the likely future importance of thermal-IR imaging in the study of distant powerful radio sources.
Comment: 17 pages incl. 14 figures, accepted by MNRAS
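The quoted "28(+25)(-13)% at 90% confidence" is the kind of asymmetric interval that small-number binomial statistics produce. As a minimal sketch of how such an interval can be derived (assuming illustrative counts of 4 reddened quasars out of 14 objects, chosen here only to give a similar central value; the paper's actual inputs and method may differ), an exact Clopper-Pearson interval looks like this:

    # A minimal sketch, not the paper's actual calculation: the counts
    # (4 reddened quasars out of 14 objects) are an assumption made here
    # purely for illustration.
    from scipy.stats import beta

    def clopper_pearson(k, n, conf=0.90):
        """Exact two-sided Clopper-Pearson confidence interval for a
        binomial proportion, given k successes in n trials."""
        alpha = 1.0 - conf
        lo = beta.ppf(alpha / 2.0, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1.0 - alpha / 2.0, k + 1, n - k) if k < n else 1.0
        return lo, hi

    k, n = 4, 14   # assumed counts, for illustration only
    p = k / n
    lo, hi = clopper_pearson(k, n)
    print(f"{100*p:.0f}(+{100*(hi - p):.0f})(-{100*(p - lo):.0f})% "
          f"at 90% confidence")

With only a handful of detections the interval is necessarily broad and asymmetric, which is why the quoted upper error bar exceeds the lower one.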
Climate Change, Cooperation, and Moral Bioenhancement
The human faculty of moral judgment is not well suited to address problems, like climate change, that are global in scope and remote in time. Advocates of 'moral bioenhancement' have proposed that we should investigate the use of medical technologies to make human beings more trusting and altruistic, and hence more willing to cooperate in efforts to mitigate the impacts of climate change. We survey recent accounts of the proximate and ultimate causes of human cooperation in order to assess the prospects for bioenhancement. We identify a number of issues that are likely to be significant obstacles to effective bioenhancement, as well as areas for future research.