13 research outputs found

    Existential Risks: Exploring a Robust Risk Reduction Strategy

    Artificial superintelligence and its limits: why AlphaZero cannot become a general agent

    An intelligent machine surpassing human intelligence across a wide set of skills has been proposed as a possible existential catastrophe (i.e., an event comparable in value to human extinction). Among those concerned about existential risk related to Artificial Intelligence (AI), it is common to assume that AI will not only be very intelligent but will also be a general agent (i.e., an agent capable of acting in many different contexts). This article explores the characteristics of machine agency and what it would mean for a machine to become a general agent. In particular, it does so by articulating some important differences between belief and desire in the context of machine agency. One such difference is that while an agent can acquire new beliefs by itself through learning, desires must be derived from preexisting desires or acquired with the help of an external influence, such as a human programmer or natural selection. We argue that to become a general agent, a machine needs productive desires, that is, desires that can direct behavior across multiple contexts. However, productive desires cannot be derived sui generis from non-productive desires. Thus, even though general agency in AI could in principle be created, it cannot be produced spontaneously by an AI through an endogenous process. In conclusion, we argue that a common AI scenario, in which general agency suddenly emerges in a non-general-agent AI, is not plausible.

    The evolving SARS-CoV-2 epidemic in Africa: Insights from rapidly expanding genomic surveillance

    INTRODUCTION: Investment in Africa over the past year with regard to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) sequencing has led to a massive increase in the number of sequences, which, to date, exceeds 100,000 sequences generated to track the pandemic on the continent. These sequences have profoundly affected how public health officials in Africa have navigated the COVID-19 pandemic.

    RATIONALE: We demonstrate how the first 100,000 SARS-CoV-2 sequences from Africa have helped monitor the epidemic on the continent, how genomic surveillance expanded over the course of the pandemic, and how we adapted our sequencing methods to deal with an evolving virus. Finally, we also examine how viral lineages have spread across the continent in a phylogeographic framework to gain insights into the underlying temporal and spatial transmission dynamics for several variants of concern (VOCs).

    RESULTS: Our results indicate that the number of countries in Africa that can sequence the virus within their own borders is growing and that this is coupled with a shorter turnaround time from the time of sampling to sequence submission. Ongoing evolution necessitated the continual updating of primer sets, and, as a result, eight primer sets were designed in tandem with viral evolution and used to ensure effective sequencing of the virus. The pandemic unfolded through multiple waves of infection that were each driven by distinct genetic lineages, with B.1-like ancestral strains associated with the first pandemic wave of infections in 2020. Successive waves on the continent were fueled by different VOCs, with Alpha and Beta cocirculating in distinct spatial patterns during the second wave and Delta and Omicron affecting the whole continent during the third and fourth waves, respectively. Phylogeographic reconstruction points toward distinct differences in viral importation and exportation patterns associated with the Alpha, Beta, Delta, and Omicron variants and subvariants, when considering both Africa versus the rest of the world and viral dissemination within the continent. Our epidemiological and phylogenetic inferences therefore underscore the heterogeneous nature of the pandemic on the continent and highlight key insights and challenges, for instance, recognizing the limitations of low testing proportions. We also highlight the early warning capacity that genomic surveillance in Africa has had for the rest of the world with the detection of new lineages and variants, the most recent being the characterization of various Omicron subvariants.

    CONCLUSION: Sustained investment for diagnostics and genomic surveillance in Africa is needed as the virus continues to evolve. This is important not only to help combat SARS-CoV-2 on the continent but also because it can be used as a platform to help address the many emerging and reemerging infectious disease threats in Africa. In particular, capacity building for local sequencing within countries or within the continent should be prioritized because this is generally associated with shorter turnaround times, providing the most benefit to local public health authorities tasked with pandemic response and mitigation and allowing for the fastest reaction to localized outbreaks. These investments are crucial for pandemic preparedness and response and will serve the health of the continent well into the 21st century.
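
    The turnaround-time trend described above (from sample collection to sequence submission) can be computed directly from sequence metadata. The Python sketch below is a minimal illustration under stated assumptions: the file name and the country, collection_date, and submission_date columns are hypothetical, not the study's actual data schema.

        import pandas as pd

        # Hypothetical metadata export; the file name and column names are
        # assumptions for illustration, not the study's actual schema.
        meta = pd.read_csv(
            "sequence_metadata.csv",
            parse_dates=["collection_date", "submission_date"],
        )

        # Turnaround time: days from sample collection to sequence submission.
        meta["turnaround_days"] = (
            meta["submission_date"] - meta["collection_date"]
        ).dt.days

        # Median turnaround per country, fastest first.
        summary = meta.groupby("country")["turnaround_days"].median().sort_values()
        print(summary)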

    Human enhancement and technological uncertainty: Essays on the promise and peril of emerging technology

    Essay I explores brain-machine interface (BMI) technologies, which make direct communication between the brain and a machine possible by means of electrical stimuli. The essay reviews the existing and emerging technologies in this field and offers an inquiry into the ethical problems that are likely to emerge. Essay II, co-written with Professor Sven-Ove Hansson, presents a novel procedure for engaging the public in deliberations on the potential impacts of technology. This procedure, the convergence seminar, is a form of scenario-based discussion founded on the idea of hypothetical retrospection. The theoretical background and the results of five such seminars are presented. Essay III discusses moral bioenhancement, an instance of human enhancement that alters a person's dispositions, emotions or behavior. Moral bioenhancement could be carried out in three different ways. The first strategy is behavioral enhancement. The second strategy, favored by prominent defenders of moral enhancement, is emotional enhancement. The third strategy is the enhancement of moral dispositions, such as empathy and inequity aversion. I argue that we ought to implement a combination of the second and third strategies. Essay IV considers the possibility and potential desirability of sensory enhancement. It is proposed that the existing sensory modalities of vertebrate animals are proof of concept of what is biologically possible to create in humans. Three considerations on the normative aspects of sensory enhancement are also presented. Essay V rejects disease prioritarianism, the idea that the healthcare system ought to prioritize the treatment of diseases; instead, an approach that focuses on what medicine can accomplish is proposed. Essay VI argues that if species have intrinsic value and humanity has a collective responsibility to protect animal species from extinction, then it follows that we ought to recreate extinct species. Essay VII argues that unknown existential risks have not been properly addressed and proposes both a heuristic for addressing them and a concrete strategy: building refuges that could withstand a large number of catastrophic events.

    Crucial Considerations: Essays on the Ethics of Emerging Technologies

    Essay I explores brain-machine interface (BMI) technologies, which make direct communication between the brain and a machine possible by means of electrical stimuli. The essay reviews the existing and emerging technologies in this field and offers a systematic inquiry into the relevant ethical problems that are likely to emerge in the following decades. Essay II, co-written with Professor Sven-Ove Hansson, presents a novel procedure for engaging the public in ethical deliberations on the potential impacts of brain-machine interface technology. We call this procedure a convergence seminar, a form of scenario-based group discussion founded on the idea of hypothetical retrospection. The theoretical background of this procedure and the results of five seminars are presented here. Essay III discusses moral enhancement, an instance of human enhancement that alters a person's dispositions, emotions or behavior in order to make that person more moral. Moral enhancement could be carried out in three different ways. The first strategy is behavioral enhancement. The second strategy, favored by prominent defenders of moral enhancement, is emotional enhancement. The third strategy is the enhancement of moral dispositions, such as empathy and inequity aversion. I argue that we ought to implement a combination of the second and third strategies.

    Ecocentrism and biosphere life extension

    The biosphere represents the global sum of all ecosystems. According to a prominent view in environmental ethics, ecocentrism, these ecosystems matter for their own sake, and not only because they contribute to human ends. As such, some ecocentrists are critical of modern industrial civilization, and a few even argue that an irreversible collapse of modern industrial civilization would be a good thing. However, taking a longer view and considering the eventual destruction of the biosphere by astronomical processes, we argue that humans, a species with considerable technological know-how and industrial capacity, could intervene to extend the lifespan of Earth's biosphere, perhaps by several billion years. We argue that human civilization, despite its flaws and harmful impacts on many ecosystems, is the biosphere's best hope of avoiding premature destruction. Proponents of ecocentrism, even those who wholly disregard anthropocentric values, therefore have a strong moral reason to preserve modern industrial civilization for as long as is needed to ensure the biosphere's survival.

    Le libre-arbitre dans l'interprétation d'Everett de la mécanique quantique

    According to the contemporary ("many-worlds") Everett interpretation of quantum mechanics, when an event happens, all alternative events compatible with the laws of quantum mechanics also happen, each in a different world within our Universe. We consider the consequences of this interpretation for the free will problem. We show that two classical conditions for free will are satisfied if this interpretation is accepted: the principle of alternative possibilities, which holds that an agent who acted freely could have acted otherwise, and the principle of antecedent determining control over freely willed actions. Our demonstration rests on Ted Sider's stage theory and on an analysis of modality inspired by David Lewis's counterpart theory. The Everett interpretation can therefore give new life to the compatibilist project, which aims to show the compatibility of determinism with free will.

    Artificial intelligence and democratic legitimacy: The problem of publicity in public authority

    Machine learning (ML) algorithms are increasingly used to support decision-making in the exercise of public authority. Here, we argue that an important consideration has been overlooked in previous discussions: whether the use of ML undermines the democratic legitimacy of public institutions. From the perspective of democratic legitimacy, it is not enough that ML contributes to efficiency and accuracy in the exercise of public authority, which has so far been the focus of the scholarly literature engaging with these developments. According to one influential theory, exercises of administrative and judicial authority are democratically legitimate if and only if administrative and judicial decisions serve the ends of the democratic lawmaker, are based on reasons that align with these ends, and are accessible to the public. These requirements are not satisfied by decisions determined through ML, since such decisions are produced by statistical operations that are opaque in several respects. However, not all ML-based decision support systems pose the same risk, and we argue that a considered judgment on the democratic legitimacy of ML in exercises of public authority needs to take the complexity of the issue into account. This paper outlines considerations that help guide the assessment of whether ML undermines democratic legitimacy when used to support public decisions. We argue that two main considerations are pertinent to this normative assessment. The first is the extent to which ML is practiced as intended and the extent to which it replaces decisions that were previously accessible and based on reasons. The second is that uses of ML in exercises of public authority should be embedded in an institutional infrastructure that secures reason-giving and accessibility.

    Rate coefficients and kinetic isotope effects of the abstraction reaction of H atoms from methylsilane

    Thermal rate constants computed with improved canonical variational transition state theory (ICVT) including small-curvature tunnelling (SCT) contributions are reported over the temperature range 180-2000 K. The general procedure combines high-quality ab initio computations with semi-classical reaction probabilities along the minimum energy path (MEP). The approach is based on a vibrationally adiabatic reaction path and is applied to the multiple-channel hydrogen abstraction reaction H + SiH3CH3 → products and its isotopically substituted variants. All degrees of freedom are optimised, and harmonic vibrational frequencies and zero-point energies are calculated at the MP2 level with the cc-pVTZ basis set. Single-point energies are calculated at a higher level of theory, CCSD(T)-F12a/VTZ-F12. The ICVT/SCT rate constants show that quantum tunnelling contributions are relatively important at low temperatures and that the H-abstraction channel from the SiH3 group of SiH3CH3 is the major pathway. The total rate constant is given by the expression k_tot(ICVT/SCT) = 2.29 x 10^-18 T^2.42 exp(-350.9/T) cm^3 molecule^-1 s^-1. The calculated rates are in agreement with the available experiments. The ICVT/SCT method is further used to predict primary and secondary kinetic isotope effects.
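
    As a quick check on the reported fit, the Python sketch below evaluates the total rate-constant expression quoted above at a few temperatures inside the stated 180-2000 K range; the function name and the sampled temperatures are illustrative choices, not taken from the original study.

        import math

        def k_tot(T):
            # ICVT/SCT fit quoted in the abstract for H + SiH3CH3 -> products:
            # k = 2.29e-18 * T^2.42 * exp(-350.9 / T), in cm^3 molecule^-1 s^-1.
            return 2.29e-18 * T**2.42 * math.exp(-350.9 / T)

        # Evaluate at a few temperatures within the reported 180-2000 K range.
        for T in (180, 298, 1000, 2000):
            print(f"T = {T:4d} K   k_tot = {k_tot(T):.3e} cm^3 molecule^-1 s^-1")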