3,388 research outputs found

    The negative way to sentience

    Get PDF
    While the materialist paradigm is credited with the incredible success of science in describing the world, to some scientists and philosophers there seems to be something about subjective experience that is left out, in an apparently irreconcilable way. I show that a scientific description of reality indeed faces a serious limitation, which explains this position. To remain in the realm of science, I instead explore the problem of sentient experience indirectly, through its possible physical correlates. This can only be done in a negative way, which consists in falsifying various hypotheses and deriving no-go results. The general approach used here is based on simple mathematical proofs about dynamical systems, which I then particularize to several types of physical theories and interpretations of quantum mechanics. Despite this scientifically prudent approach, it turns out that various ways of taking sentience as fundamental make empirical predictions, ranging from some that can only be verified on a subjective basis to some about the physical correlates of sentience, which are independently falsifiable by objective means.

    Digital Twins: Potentials, Ethical Issues, and Limitations

    Full text link
    After Big Data and Artificial Intelligence (AI), Digital Twins have emerged as another promising technology, advocated, built, and sold by various IT companies. The approach aims to produce highly realistic models of real systems. In the case of dynamically changing systems, such digital twins would have a life of their own, i.e. they would change their behaviour over time and, in perspective, take decisions like their real counterparts, or so the vision goes. In contrast to animated avatars, however, which only imitate the behaviour of real systems, like deep fakes, digital twins aim to be accurate "digital copies", i.e. "duplicates" of reality, which may interact with reality and with their physical counterparts. This chapter explores possible applications and implications, limitations, and threats. Comment: 22 pages; chapter in Andrej Zwitter and Oskar Gstrein (eds.), Handbook on the Politics and Governance of Big Data and Artificial Intelligence, Edward Elgar (Handbooks in Political Science series, forthcoming).

    Artificial Superintelligence: Coordination & Strategy

    Get PDF
    The AI safety community has increasingly turned its attention to strategic considerations of coordination between relevant actors in the fields of AI and AI safety, in addition to the steadily growing body of work on the technical considerations of building safe AI systems. This shift has several drivers: multiplier effects, pragmatism, and urgency. Given the benefits of coordination between those working towards safe superintelligence, this book surveys promising research in this emerging area of AI safety. On a meta-level, the hope is that this book can serve as a map to inform those working on AI coordination about other promising efforts. While this book focuses on coordination for AI safety, coordination is important to most other known existential risks (e.g., biotechnology risks) and to future, human-made existential risks. Thus, while most coordination strategies in this book are specific to superintelligence, we hope that some insights yield “collateral benefits” for the reduction of other existential risks, by creating an overall civilizational framework that increases robustness, resiliency, and antifragility.

    Exploring Artificial Intelligence Bias, Fairness and Ethics in Organisation and Managerial Studies

    Get PDF
    Given the increasing adoption of AI technology in society, this paper develops a comprehensive overview of the current debate on artificial intelligence bias, fairness, and ethics in organisation and managerial studies. To this end, we adopted the Computational Literature Review (CLR) method to conduct an impact analysis and a topic-modelling analysis of the relevant literature, using the Latent Dirichlet Allocation (LDA) technique. As a result, we identified and analysed 18 topics related to the selected domain. We further classified those topics into 5 categories, drawing a clear distinction between the social and the technical nature of a bias and its origins. Finally, focusing on the emerging topics, we propose a set of guiding questions that might foster future research directions. This paper provides insights to scholars and managers interested in AI bias and ethical issues, and it can also serve as a guide for performing a CLR.
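
    The abstract does not spell out the CLR tooling; purely as illustration, here is a minimal Python sketch of the LDA topic-modelling step it describes, using scikit-learn. The toy corpus, the vectorizer settings, and the choice of 18 components (mirroring the 18 topics the authors report) are assumptions, not the paper's actual pipeline.

        # Hypothetical sketch of the LDA step described in the abstract.
        # Corpus, preprocessing, and tooling are assumptions; the paper does
        # not state its implementation.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        abstracts = [
            "algorithmic bias in automated hiring decisions",
            "fairness metrics for machine learning models",
            "ethics of machine learning bias in organisations",
            "managerial perspectives on algorithmic fairness",
            # ... in practice, one entry per paper in the reviewed corpus
        ]

        # Bag-of-words counts; LDA expects raw term frequencies, not tf-idf.
        vectorizer = CountVectorizer(stop_words="english")
        doc_term = vectorizer.fit_transform(abstracts)

        # n_components=18 mirrors the 18 topics reported in the abstract.
        lda = LatentDirichletAllocation(n_components=18, random_state=0)
        lda.fit(doc_term)

        # Print the top terms of each inferred topic.
        terms = vectorizer.get_feature_names_out()
        for k, weights in enumerate(lda.components_):
            top = [terms[i] for i in weights.argsort()[::-1][:8]]
            print(f"Topic {k}: {', '.join(top)}")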

    The Mind-Body Problem and Its Solution (Second Edition)

    Get PDF
    Lays out the problem of sentience in a physical world and the solution based on the event ontology of Russell and Whitehead. The second edition adds a construction of physics from arrow diagrams, stemming from the discovery that the arrows of time form frequency ratios, which serve to define energy ratios and the quantum.
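
    As a gloss on that last claim (our reading, not the book's own derivation): the Planck relation ties energy to frequency, so ratios of frequencies fix ratios of energies, with the quantum of action h as the constant of proportionality:

        E = h\nu \qquad\Longrightarrow\qquad \frac{E_1}{E_2} = \frac{\nu_1}{\nu_2}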

    Belief Is Not Experience: Transformation as a Tool for Bridging the Ontological Divide in Anthropological Research and Reporting

    Get PDF
    For more than a hundred years, anthropologists have recorded stories of beliefs in other-than-human sentience and consciousness, yet we have most frequently insisted on contextualizing these stories in terms of cultural, epistemological, or ontological relativism. In this paper, I ask why we have had such a hard time taking reports of unseen realms seriously and describe the transformative role of personal experience as a catalyst for change in anthropological research and reporting.

    Distributed information extraction from large-scale wireless sensor networks

    Get PDF

    Punishing Artificial Intelligence: Legal Fiction or Science Fiction

    Get PDF
    Whether causing flash crashes in financial markets, purchasing illegal drugs, or running over pedestrians, AI is increasingly engaging in activity that would be criminal for a natural person, or even an artificial person like a corporation. We argue that criminal law falls short in cases where an AI causes certain types of harm and there are no practically or legally identifiable upstream criminal actors. This Article explores potential solutions to this problem, focusing on holding AI directly criminally liable where it is acting autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement of a guilty mind. Drawing on analogies to corporate and strict criminal liability, as well as familiar imputation principles, we show how a coherent theoretical case can be constructed for AI punishment. AI punishment could result in general deterrence and expressive benefits, and it need not run afoul of negative limitations such as punishing in excess of culpability. Ultimately, however, punishing AI is not justified, because it might entail significant costs and it would certainly require radical legal changes. Modest changes to existing criminal laws that target persons, together with potentially expanded civil liability, are a better solution to AI crime.