
    Building Ethically Bounded AI

    The more AI agents are deployed in scenarios with possibly unexpected situations, the more they need to be flexible, adaptive, and creative in achieving the goals we have given them. Thus, a certain degree of freedom to choose the best path to the goal is inherent in making AI robust and flexible enough. At the same time, however, the pervasive deployment of AI in our lives, whether autonomous or collaborating with humans, raises several ethical challenges. AI agents should be aware of and follow appropriate ethical principles, and should thus exhibit properties such as fairness and other virtues. These ethical principles should define the boundaries of AI's freedom and creativity. However, it is still a challenge to understand how to specify and reason with ethical boundaries in AI agents, and how to combine them appropriately with subjective preferences and goal specifications. Some initial attempts employ either a data-driven, example-based approach or a symbolic, rule-based approach for both ingredients. We envision a modular approach in which any AI technique can be used for any of these essential ingredients in decision-making or decision-support systems, paired with a contextual approach that defines their combination and relative weight. In a world where neither humans nor AI systems work in isolation but are tightly interconnected, e.g., in the Internet of Things, we also envision a compositional approach to building ethically bounded AI, where the ethical properties of each component can be fruitfully exploited to derive those of the overall system. In this paper we define and motivate the notion of ethically bounded AI, describe two concrete examples, and outline some outstanding challenges. Comment: Published at the AAAI Blue Sky Track; winner of a Blue Sky Award.
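
    As a rough illustration of the modular scheme envisioned above, the following Python sketch (all names, actions, and weights are hypothetical, not from the paper) treats ethical boundaries as hard constraints and combines subjective preferences with goal utilities via contextual weights:

        from typing import Callable, Iterable

        def choose_action(actions: Iterable[str],
                          is_ethical: Callable[[str], bool],
                          preference: Callable[[str], float],
                          goal_value: Callable[[str], float],
                          w_pref: float = 0.3,
                          w_goal: float = 0.7) -> str:
            # Ethical boundaries act as hard constraints: actions outside
            # the boundary are never considered, however well they score.
            permitted = [a for a in actions if is_ethical(a)]
            if not permitted:
                raise ValueError("no ethically permissible action available")
            # Within the boundary, preferences and goals are traded off with
            # context-dependent weights (fixed constants here for brevity).
            return max(permitted,
                       key=lambda a: w_pref * preference(a) + w_goal * goal_value(a))

        # Example: three candidate actions, one outside the ethical boundary.
        best = choose_action(
            ["share_data", "withhold_data", "sell_data"],
            is_ethical=lambda a: a != "sell_data",
            preference=lambda a: {"share_data": 0.2, "withhold_data": 0.9, "sell_data": 1.0}[a],
            goal_value=lambda a: {"share_data": 0.9, "withhold_data": 0.1, "sell_data": 1.0}[a])
        print(best)   # "share_data"

    Each ingredient here (the boundary predicate, the preference model, the goal utility) could be realized by a different AI technique, data-driven or symbolic, which is the point of the modular approach.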

    PCSIM: A Parallel Simulation Environment for Neural Circuits Fully Integrated with Python

    The Parallel Circuit SIMulator (PCSIM) is a software package for the simulation of neural circuits. It is primarily designed for distributed simulation of large-scale networks of spiking point neurons. Although its computational core is written in C++, PCSIM's primary interface is implemented in the Python programming language, a powerful programming environment that allows the user to easily integrate the neural circuit simulator with data analysis and visualization tools and so manage the full neural modeling life cycle. The main focus of this paper is to describe PCSIM's full integration into Python and the benefits thereof. In particular, we investigate how the automatically generated bidirectional interface and PCSIM's object-oriented modular framework enable the user to adopt a hybrid modeling approach: using and extending PCSIM's functionality in either pure Python or C++, thus combining the advantages of both worlds. Furthermore, we describe several supplementary PCSIM packages written in pure Python and tailored towards setting up and analyzing neural simulations.
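
    The hybrid modeling approach can be pictured with a small, self-contained Python sketch. PCSIM's actual class names differ; CorePointNeuron below is merely a pure-Python stand-in for a C++-implemented model that PCSIM exposes to Python through its automatically generated interface:

        class CorePointNeuron:
            """Stand-in for a C++ point-neuron model exposed to Python."""
            def __init__(self, tau=0.02):
                self.tau, self.v, self.input_current = tau, 0.0, 0.0

            def advance(self, dt):
                # Leaky integration; in PCSIM this inner loop runs in
                # compiled C++ for speed.
                self.v += dt * (-self.v + self.input_current) / self.tau

        class AdaptiveNeuron(CorePointNeuron):
            """User extension in pure Python: quick to prototype, and
            usable in the same network as the compiled models."""
            def advance(self, dt):
                super().advance(dt)
                self.input_current *= 0.99   # toy adaptation term

        # Compiled and pure-Python models are then stepped uniformly:
        neurons = [CorePointNeuron() for _ in range(3)] + [AdaptiveNeuron()]
        for step in range(1000):
            for n in neurons:
                n.advance(dt=1e-4)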

    Redundant neural vision systems: competing for collision recognition roles

    The ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directional selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modelling studies showed that either the LGMD or grouped DSNs could be tuned for collision recognition. In both biological and artificial vision systems, however, it is not clear which one should play the collision recognition role, or how the two types of specialized visual neurons could function together. In this modelling study, we compared the competence of the LGMD and the DSNs, and also investigated the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems in each individual agent: the LGMD, the DSNs, and a hybrid system combining the two. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotic and driving environments, the LGMD was able to build up its ability for collision recognition quickly and robustly, thereby reducing the chance for the other types of neural networks to take on the same role. The results suggest that the LGMD neural network could be the ideal model to be realized in hardware for collision recognition.
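
    The switch-gene mechanism can be sketched as a toy genetic algorithm in Python; the encoding and fitness function below are illustrative stand-ins, not the paper's actual setup:

        import random

        SUBSYSTEMS = ("LGMD", "DSN", "HYBRID")

        def make_genome():
            # One switch gene selecting the active subsystem, plus the
            # (here abstract) parameters of the subsystems themselves.
            return {"switch": random.choice(SUBSYSTEMS),
                    "params": [random.uniform(-1, 1) for _ in range(8)]}

        def mutate(g, rate=0.1):
            child = {"switch": g["switch"], "params": list(g["params"])}
            # The switch gene itself can flip, re-assigning the collision
            # recognition role to another redundant subsystem.
            if random.random() < rate:
                child["switch"] = random.choice(SUBSYSTEMS)
            child["params"] = [p + random.gauss(0, 0.05) if random.random() < rate else p
                               for p in child["params"]]
            return child

        def fitness(g):
            # Placeholder: in the study this would be the agent's collision
            # recognition performance in robotic or driving trials.
            bias = {"LGMD": 0.2, "DSN": 0.0, "HYBRID": 0.1}[g["switch"]]
            return bias + 0.01 * sum(g["params"]) + random.gauss(0, 0.02)

        population = [make_genome() for _ in range(50)]
        for generation in range(100):
            population.sort(key=fitness, reverse=True)
            elite = population[:10]
            population = elite + [mutate(random.choice(elite)) for _ in range(40)]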

    Virtual Partner Interaction (VPI): Exploring Novel Behaviors via Coordination Dynamics

    Inspired by the dynamic clamp of cellular neuroscience, this paper introduces VPI (Virtual Partner Interaction), a coupled dynamical system for studying real-time interaction between a human and a machine. In this proof-of-concept study, human subjects coordinate hand movements with a virtual partner, an avatar of a hand whose movements are driven by a computerized version of the Haken-Kelso-Bunz (HKB) equations that have been shown to govern basic forms of human coordination. As a surrogate system for human social coordination, VPI allows one to examine regions of the parameter space not typically explored during live interactions. A number of behaviors never previously observed are uncovered and accounted for. Having its basis in an empirically derived theory of human coordination, VPI offers a principled approach to human-machine interaction and opens up new ways to understand how humans interact with human-like machines, including the identification of underlying neural mechanisms.
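
    The best-known reduced form of the HKB model describes the relative phase phi between the two movements; a minimal Python sketch of its integration (parameter values are illustrative) shows the characteristic bistability of in-phase and anti-phase coordination:

        import math

        def hkb_phase(phi0, d_omega=0.0, a=1.0, b=0.5, dt=0.01, steps=5000):
            # dphi/dt = d_omega - a*sin(phi) - 2*b*sin(2*phi)
            # With d_omega = 0 and b/a > 1/4, both phi = 0 (in-phase) and
            # phi = pi (anti-phase) are stable coordination patterns.
            phi = phi0
            for _ in range(steps):
                phi += dt * (d_omega - a * math.sin(phi) - 2 * b * math.sin(2 * phi))
            return phi

        print(hkb_phase(0.3))   # settles near 0:  in-phase coordination
        print(hkb_phase(2.8))   # settles near pi: anti-phase coordination

    In VPI itself, by analogy with the dynamic clamp, a computerized version of the coupled HKB equations drives the avatar in real time, with the human's ongoing movement entering the coupling.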

    Artificial intelligence in nanotechnology

    During the last decade there has been increasing use of artificial intelligence tools in nanotechnology research. In this paper we review some of these efforts in the context of interpreting scanning probe microscopy, the study of biological nanosystems, the classification of material properties at the nanoscale, theoretical approaches and simulations in nanoscience, and, more generally, the design of nanodevices. Current trends and future perspectives in the development of nanocomputing hardware that can boost artificial-intelligence-based applications are also discussed. Convergence between artificial intelligence and nanotechnology can shape the path for many technological developments in the field of information sciences that will rely on new computer architectures and data representations, hybrid technologies that use biological entities and nanotechnological devices, bioengineering, neuroscience, and a large variety of related disciplines. A definitive version of this work was published in Nanotechnology 24.45 (2013): 452002.

    Rapid Visual Categorization is not Guided by Early Salience-Based Selection

    The currently dominant visual processing paradigm, in both human and machine research, is the feedforward, layered hierarchy of neural-like processing elements. Within this paradigm, visual saliency is seen by many to have a specific role, namely that of early selection. Early selection is thought to enable very fast visual performance by limiting processing to only the most salient candidate portions of an image. This strategy has led to a plethora of saliency algorithms that have indeed improved processing-time efficiency in machine algorithms, which in turn has strengthened the suggestion that human vision employs a similar early selection strategy. However, at least one set of critical tests of this idea has never been performed with respect to the role of early selection in human vision. How would the best of the current saliency models perform on the stimuli used by the experimentalists who first provided evidence for this visual processing paradigm? Would the algorithms really provide correct candidate sub-images to enable fast categorization on those same images? Do humans really need this early selection for their impressive performance? Here, we report on a new series of tests of these questions whose results suggest that it is quite unlikely that such an early selection process plays any role in human rapid visual categorization. Comment: 22 pages, 9 figures.
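
    The early-selection strategy under test can be made concrete with a short, self-contained Python sketch: a saliency map proposes the single most salient sub-image, and only that crop would be passed on for categorization. The contrast-based saliency measure below is a deliberately crude stand-in for the published models the paper evaluates:

        import numpy as np

        def saliency(img):
            # Center-surround-like contrast: |pixel - local mean|, with the
            # local mean approximated by averaging shifted copies.
            pad = np.pad(img, 8, mode="edge")
            shifts = [pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                      for dy in (0, 8, 16) for dx in (0, 8, 16)]
            return np.abs(img - np.mean(shifts, axis=0))

        def most_salient_crop(img, size=32):
            s = saliency(img)
            y, x = np.unravel_index(np.argmax(s), s.shape)
            y0 = int(np.clip(y - size // 2, 0, img.shape[0] - size))
            x0 = int(np.clip(x - size // 2, 0, img.shape[1] - size))
            return img[y0:y0 + size, x0:x0 + size]

        img = np.random.rand(128, 128).astype(np.float32)
        crop = most_salient_crop(img)        # only this crop is categorized
        print(crop.shape)                    # (32, 32)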

    The Compass, Issue 7
