
    Sims and Vulnerability: On the Ethics of Creating Emulated Minds

    It might become possible to build artificial minds with the capacity for experience. This raises a plethora of ethical issues, explored, among other contexts, in that of whole brain emulations (WBE). In this paper, I will take up the problem of vulnerability that conscious emulations will likely exhibit – a problem that, for various reasons, has received less attention in the literature. Specifically, I will examine the role that vulnerability plays in generating the ethical issues that may arise when dealing with WBEs. I will argue that concerns about vulnerability are matters of institutional design more than of individual ethics, both when it comes to creating humanlike brain emulations and when animal-like emulations are concerned. Consequently, the article reflects on some institutional measures that can be taken to protect the sims' interests. It concludes that an institutional framework is more likely to succeed in this task if it is competitive and polycentric, rather than monopolistic and centralized.

    EEG correlates of social interaction at distance

    This study investigated EEG correlates of social interaction at a distance between twenty-five pairs of participants who were not connected by any conventional channel of communication. Each session involved 128 stimulations separated by intervals of random duration ranging from 4 to 6 seconds. One member of each pair received a one-second stimulation from a light signal produced by an array of red LEDs, together with a simultaneous 500 Hz sinusoidal audio signal of the same duration. The other member of the pair sat in an isolated, sound-proof room, so that any sensory interaction between the pair was impossible. An analysis of the event-related potentials associated with sensory stimulation, using traditional averaging methods, showed a distinct peak at approximately 300 ms, but only in the EEG activity of the directly stimulated subjects. However, when a new algorithm based on the correlation between signals from all active electrodes was applied to the EEG activity, a weak but robust response was also detected in the EEG activity of the passive member of the pair, particularly within 9–10 Hz in the alpha range. Using the bootstrap method and Monte Carlo simulation, this signal was found to be statistically significant.
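    The significance-testing idea described above — comparing an observed correlation against a resampled null distribution — can be sketched as a simple permutation test. This is a minimal illustration, not the paper's actual algorithm; the function names, shuffling scheme, and iteration count are assumptions.

```python
import math
import random

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def permutation_pvalue(x, y, n_iter=2000, seed=0):
    """Estimate how often a randomly re-paired signal matches or exceeds
    the observed correlation magnitude (two-sided permutation p-value)."""
    rng = random.Random(seed)
    observed = abs(pearson_r(x, y))
    y_shuffled = list(y)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(y_shuffled)
        if abs(pearson_r(x, y_shuffled)) >= observed:
            hits += 1
    # Add-one correction keeps the estimate away from an exact zero.
    return (hits + 1) / (n_iter + 1)
```

    A p-value well below the chosen threshold (e.g. 0.01) would indicate that a correlation of the observed size is unlikely to arise from chance pairings alone.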

    Singularity Hypotheses: An Overview

    In a widely read but controversial article, Bill Joy claimed that the most powerful 21st century technologies threaten to make humans an endangered species. Indeed, a growing number of scientists, philosophers, and forecasters insist that accelerating progress in disruptive technologies such as artificial intelligence, robotics, genetic engineering, and nanotechnology may lead to what they refer to as the technological singularity: an event or phase that will radically change human civilization, and perhaps even human nature itself, before the middle of the 21st century.

    Extending perspective taking to nonhuman animals and artificial entities

    Perspective taking can have positive effects in a range of intergroup contexts. In two experiments, we tested whether these effects generalize to two as-yet-unstudied nonhuman groups: animals and intelligent artificial entities. We found no overall effect of taking the perspective of either a farmed pig or an artificial entity on moral attitudes, compared with instructions to stay objective and with a neutral condition. However, in both studies, mediation analysis indicated that perspective taking positively affected moral attitudes via empathic concern and self-other overlap, supporting two mechanisms well established in the literature. The lack of overall effects may be partly explained by positive effects of staying objective on moral attitudes, which offset the positive effects of perspective taking via empathic concern and self-other overlap. These findings suggest that perspective taking functions differently in the context of nonhuman groups than in typical intergroup contexts. We consider this an important area for future research.

    Personal Universes: A Solution to the Multi-Agent Value Alignment Problem

    AI safety researchers attempting to align the values of highly capable intelligent systems with those of humanity face a number of challenges, including personal value extraction, multi-agent value merger, and, finally, in-silico encoding. State-of-the-art research in value alignment shows difficulties at every stage of this process, but the merger of incompatible preferences is a particularly difficult challenge to overcome. In this paper we assume that the value extraction problem will be solved and propose a possible way to implement an AI solution that optimally aligns with the individual preferences of each user. We conclude by analyzing the benefits and limitations of the proposed approach.

    Philosophy of Computer Science: An Introductory Course

    There are many branches of philosophy called “the philosophy of X,” where X ranges over disciplines from history to physics. The philosophy of artificial intelligence has a long history, and there are many courses and texts with that title. Surprisingly, the philosophy of computer science is not nearly as well developed. This article proposes topics that might constitute the philosophy of computer science and describes a course covering those topics, along with suggested readings and assignments.

    Impact of alife simulation of Darwinian and Lamarckian evolutionary theories

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. To this day, the scientific community has firmly rejected the Theory of Inheritance of Acquired Characteristics, a theory mostly associated with the name of Jean-Baptiste Lamarck (1744-1829). Though largely dismissed when applied to biological organisms, this theory has found a place in a young discipline called Artificial Life. Based on two abstract models of the Darwinian and Lamarckian evolutionary theories, built using neural networks and genetic algorithms, this research aims to give a notion of the potential impact of implementing Lamarckian knowledge inheritance across disciplines. To obtain our results, we conducted a focus-group discussion between experts in biology, computer science, and philosophy, and used their opinions as qualitative data in our research. Through this procedure, we found several implications of such an implementation in each of these disciplines. In synthetic biology, it means that we could engineer organisms precisely to our specific needs; at the moment, we can think of better drugs, greener fuels, and dramatic changes in the chemical industry. In computer science, Lamarckian evolutionary algorithms have been used for some years, and quite successfully; their application in strong ALife, however, can only be approximated from the existing roadmaps of futurists. In philosophy, creating artificial life seems consistent with nature, and even with God, if there is one. At the same time, this implementation may contradict the concept of free will, defined as the capacity of an agent to make choices whose outcome has not been determined by past events. This study has certain limitations: a larger focus group with better-prepared participants would yield more precise results.
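    The contrast between the two inheritance modes that the abstract's models are built on can be illustrated with a deliberately minimal genetic algorithm over a scalar genome. This is an illustrative sketch only — the thesis itself uses neural networks, and the target value, learning procedure, and parameters here are assumptions: in the Lamarckian variant, traits acquired by lifetime learning are written back into the genome before reproduction; in the Darwinian variant, the unmodified genome is inherited.

```python
import random

TARGET = 42.0  # illustrative optimum; fitness is negative distance to it

def fitness(genome):
    return -abs(genome - TARGET)

def learn(genome, steps=5, step_size=1.0, rng=random):
    """Lifetime learning: local hill-climbing on the phenotype."""
    best = genome
    for _ in range(steps):
        candidate = best + rng.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

def evolve(lamarckian, generations=100, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 100.0) for _ in range(pop_size)]
    for _ in range(generations):
        learned = [learn(g, rng=rng) for g in pop]
        # Selection acts on learned (phenotypic) fitness in both variants.
        ranked = sorted(zip(learned, pop), key=lambda t: fitness(t[0]),
                        reverse=True)
        parents = ranked[:pop_size // 2]
        # Lamarckian inheritance copies the learned trait back into the
        # genome; Darwinian inheritance passes on the unmodified genome.
        genomes = [l if lamarckian else g for l, g in parents]
        # Each parent produces two mutated offspring.
        pop = [g + rng.gauss(0.0, 0.5) for g in genomes for _ in (0, 1)]
    return max(fitness(learn(g, rng=rng)) for g in pop)
```

    With the learning step committed back into the genome, the Lamarckian run converges much faster, which is the usual motivation for Lamarckian variants of evolutionary algorithms.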

    The Moral Status of Whole Brain Emulations

    First lines of the Introduction (as no abstract was provided): Artificial Intelligence is going to radically change our world; the only real question is by how much. A number of prominent figures believe that current AI research might initiate a so-called technological singularity - a period in which intelligent machines design even more intelligent machines, setting off an exponentially accelerating cascade of advancement whose end result, a superintelligence, would be “the last invention that man need ever make” (Good 1965). However, even for those who dismiss such singularity talk as hyperbolic sci-fi nonsense, the fact that we’re on the cusp of an AI revolution - and that society is going to look very different once it’s over - seems undeniable. Already, AI systems are changing how we eat, how we transport people and goods, how we diagnose and treat illnesses, and how we wage war. They are replacing and outperforming humans in a plethora of tasks, many of which were once thought to require a uniquely human “instinct”, and their scope of application only looks to be increasing.

    Editorial: Risks of general artificial intelligence

    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai, and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a major risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that would best be avoided or strengthened.