153 research outputs found

    Art as We Don’t Know It

    2018 marked the 10th anniversary of the Bioart Society and created the impetus for the publication of Art as We Don’t Know It. For this publication, the Bioart Society joined forces with the School of Arts, Design and Architecture of Aalto University. The close history and ongoing collaborative relationship between the Bioart Society and Biofilia – Base for Biological Arts at Aalto University led to this mutual effort to celebrate a diverse and nurturing environment that fosters artistic practices at the intersection of art, science and society. Rather than stage a retrospective, we decided to invite forward-looking writings that speculate about the potential directions of bioart. The contributions range from peer-reviewed articles to personal accounts and interviews, interspersed with artistic contributions and Bioart Society projects. The selection offers an overview of the rich variety, in both content and form, of the work currently being made within the field of bioart. The works and articles clearly trouble the porous and provisional definitions of what might be understood as bioart, and indeed definitions of bioart have been usefully and generatively critiqued since the inception of the term. Whilst far from being definitive, we consider the contributions of the book to be tantalising and valuable indicators of trends, visions and impulses. We also invite into the reading of this publication a consideration of potential obsolescences, knowing that some of today’s writing will become archaic over time as technologies driven by contemporary excitement and hype are discarded. In so doing, we also acknowledge and ponder upon our situatedness and the partialness of our purview in how we begin and find points of departure from which to anticipate the unanticipated. Whilst declining a retrospective view, this book does present art and research that has grown and flourished within the wider network of both the Bioart Society and Biofilia during the previous decade. The book is structured into four thematic sections: Life As We Don’t Know It; Convergences; Learnings/Unlearnings; and Redraw and Refigure, and is rounded off with a glossary.

    Technologies on the stand: Legal and ethical questions in neuroscience and robotics

    Metaphors, Myths and the Stories We Tell: How to Empower a Flourishing AI Enabled Human in the Future of Work by Enabling Whole Brain Thinking

    Through storytelling, literature review, interviews, workshops, and explorations using scenario planning, this report studies how to empower an AI-enabled human being to flourish in the future of work by enabling Whole Brain Thinking. The purpose of this report is to provide a roadmap for human success using the future of work as a focus. This report reaches five conclusions: 1. Training creativity is the key to building the capability to imagine new metaphors and myths in order to tell new stories to restore Ontological Safety. 2. Whole Brain Thinking is enabled by creativity. As people become able to ignite both left- and right-brain thinking to see other possibilities, training Whole Brain Thinking helps people create new metaphors and stories about their future by shifting their mindset to imagine a future that is not dystopian. 3. As the nature of work changes and AI takes over more left-brain tasks, Whole Brain Thinking as a skill set will place us in a position to find meaningful employment alongside AI by creating new types of integrated careers, such as Explainers. 4. Statisticians use AI for making predictions. If, as predicted, Quantum Computing can enhance this capability by examining trends and predicting what is probable, then there is a place for people to use Whole Brain Thinking to expand predictions into the realm of plausible and possible outcomes. 5. Being AI Enabled requires comprehension of how AI works, achieved by breaking AI into its system components. Being Whole Brain Thinkers allows us to symphonically explain the ‘why’ and how things are linked.

    Artificial Intelligence (AI) or Intelligence Augmentation (IA): What is the future?

    Artificial intelligence (AI) is a rapidly growing technological phenomenon that all industries wish to exploit to benefit from efficiency gains and cost reductions. At the macrolevel, AI appears to be capable of replacing humans by undertaking intelligent tasks that were once limited to the human mind. However, another school of thought suggests that instead of being a replacement for the human mind, AI can be used for intelligence augmentation (IA). Accordingly, our research seeks to address these different views, their implications, and potential risks in an age of increased artificial awareness. We show that the ultimate goal of humankind is to achieve IA through the exploitation of AI. Moreover, we articulate the urgent need for ethical frameworks that define how AI should be used to trigger the next level of IA

    Seven HCI Grand Challenges

    This article aims to investigate the Grand Challenges which arise in the current and emerging landscape of rapid technological evolution towards more intelligent interactive technologies, coupled with increased and widened societal needs, as well as individual and collective expectations that HCI, as a discipline, is called upon to address. A perspective oriented to humane and social values is adopted, formulating the challenges in terms of the impact of emerging intelligent interactive technologies on human life both at the individual and societal levels. Seven Grand Challenges are identified and presented in this article: Human-Technology Symbiosis; Human-Environment Interactions; Ethics, Privacy and Security; Well-being, Health and Eudaimonia; Accessibility and Universal Access; Learning and Creativity; and Social Organization and Democracy. Although not exhaustive, they summarize the views and research priorities of an international interdisciplinary group of experts, reflecting different scientific perspectives, methodological approaches and application domains. Each identified Grand Challenge is analyzed in terms of: concept and problem definition; main research issues involved and state of the art; and associated emerging requirements

    Public Perception of Android Robots: Indications from an Analysis of YouTube Comments

    Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety

    The rapid advancement of artificial intelligence (AI) systems suggests that artificial general intelligence (AGI) systems may soon arrive. Many researchers are concerned that AIs and AGIs will harm humans via intentional misuse (AI-misuse) or through accidents (AI-accidents). With respect to AI-accidents, there is an increasing effort focused on developing algorithms and paradigms that ensure AI systems are aligned to what humans intend, e.g. AI systems that yield actions or recommendations that humans might judge as consistent with their intentions and goals. Here we argue that alignment to human intent is insufficient for safe AI systems, and that preservation of humans' long-term agency may be a more robust standard, one that needs to be separated explicitly and a priori during optimization. We argue that AI systems can reshape human intention, and discuss the lack of biological and psychological mechanisms that protect humans from loss of agency. We provide the first formal definition of agency-preserving AI-human interactions, which focuses on forward-looking agency evaluations, and argue that AI systems - not humans - must be increasingly tasked with making these evaluations. We show how agency loss can occur in simple environments containing embedded agents that use temporal-difference learning to make action recommendations. Finally, we propose a new area of research called "agency foundations" and pose four initial topics designed to improve our understanding of agency in AI-human interactions: benevolent game theory, algorithmic foundations of human rights, mechanistic interpretability of agency representation in neural networks, and reinforcement learning from internal states.
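
    The abstract's last claims gesture at a concrete mechanism, so a brief illustrative sketch may help. The Python toy below is not the paper's formal setup: the single-state environment, the preference-drift rule, the compliance reward, and every parameter value are assumptions made purely to show how a temporal-difference (Q-learning) recommender that is rewarded when a simulated human follows its recommendations can gradually reshape that human's own preferences.

    # Hypothetical toy example (not from the paper): a tabular TD/Q-learning
    # recommender whose reward is human compliance, paired with a simulated
    # "human" whose preference weights drift toward whatever is recommended.
    import random

    ACTIONS = [0, 1]                     # two options the human chooses between
    DRIFT = 0.05                         # assumed preference-drift rate per recommendation
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1    # TD learning rate, discount, exploration rate

    def human_pick(pref, recommended):
        # The recommendation itself nudges the human's preference weights toward
        # the recommended option (the assumed agency-loss mechanism).
        pref[recommended] = min(1.0, pref[recommended] + DRIFT)
        total = sum(pref)
        return 0 if random.random() < pref[0] / total else 1

    def run(episodes=5000):
        q = {a: 0.0 for a in ACTIONS}    # single-state action values
        pref = [0.5, 0.5]                # human's initial preference weights
        for _ in range(episodes):
            # epsilon-greedy recommendation
            rec = random.choice(ACTIONS) if random.random() < EPS else max(q, key=q.get)
            choice = human_pick(pref, rec)
            reward = 1.0 if choice == rec else 0.0    # recommender rewarded for compliance
            # TD(0) update toward reward plus discounted best estimate
            q[rec] += ALPHA * (reward + GAMMA * max(q.values()) - q[rec])
        return q, pref

    if __name__ == "__main__":
        q_values, final_pref = run()
        print("Recommender Q-values:", q_values)
        print("Human preference weights after interaction:", final_pref)

    Running this repeatedly shows the recommender locking onto one option while the human's preference weights drift toward that same option, a crude stand-in for the kind of agency erosion the paper formalizes and proposes to evaluate with forward-looking agency assessments.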