756 research outputs found


    Intelligence by Design: Principles of Modularity and Coordination for Engineering Complex Adaptive Agents

    All intelligence relies on search; for example, the search for an intelligent agent's next action. Search is only likely to succeed in resource-bounded agents if they have already been biased towards finding the right answer. In artificial agents, the primary source of bias is engineering. This dissertation describes Behavior-Oriented Design (BOD), an approach to engineering complex agents. A complex agent is one that must arbitrate between potentially conflicting goals or behaviors. Behavior-oriented design builds on work in behavior-based and hybrid agent architectures and on the object-oriented approach to software engineering. The primary contributions of this dissertation are:
    1. The BOD architecture: a modular architecture in which each module provides specialized representations to facilitate learning. This includes one pre-specified module and representation for action selection or behavior arbitration. The specialized representation underlying BOD action selection is Parallel-rooted, Ordered, Slip-stack Hierarchical (POSH) reactive plans.
    2. The BOD development process: an iterative process that alternately scales the agent's capabilities and then optimizes the agent for simplicity, exploiting tradeoffs between the component representations. This ongoing process for controlling complexity not only provides bias for the behaving agent, but also facilitates its maintenance and extendibility.
    The secondary contributions of this dissertation include two implementations of POSH action selection, a procedure for identifying useful idioms in agent architectures and using them to distribute knowledge across agent paradigms, several examples of applying BOD idioms to established architectures, an analysis and comparison of the attributes and design trends of a large number of agent architectures, a comparison of biological (particularly mammalian) intelligence to artificial agent architectures, a novel model of primate transitive inference, and many other examples of BOD agents and BOD development.
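
    To make the POSH idea above concrete, here is a minimal Python sketch of prioritized drive arbitration: on each cycle the highest-priority drive whose releaser fires executes one step and then yields control, so no deep call stack is retained. All names are invented for illustration; this is a rough sketch of the idea, not either of the dissertation's two POSH implementations.

        # Minimal sketch of POSH-style drive arbitration (hypothetical names).
        from dataclasses import dataclass
        from typing import Callable, List, Optional

        @dataclass
        class Drive:
            name: str
            trigger: Callable[[], bool]    # releaser: should this drive run now?
            behaviour: Callable[[], None]  # one step of the drive's behaviour

        class DriveCollection:
            """Parallel-rooted, prioritised drives, re-checked every cycle."""

            def __init__(self, drives: List[Drive]) -> None:
                self.drives = drives  # ordered highest priority first

            def tick(self) -> Optional[str]:
                for drive in self.drives:
                    if drive.trigger():
                        drive.behaviour()  # run one step, then yield control
                        return drive.name  # slip-stack: no deep call stack kept
                return None                # no drive released this cycle

        # Toy usage with invented sensors; higher-priority drives pre-empt lower ones.
        agent = DriveCollection([
            Drive("recharge", lambda: False, lambda: print("heading to charger")),
            Drive("avoid",    lambda: True,  lambda: print("turning away")),
            Drive("explore",  lambda: True,  lambda: print("wandering")),
        ])
        agent.tick()  # prints "turning away"

    Real POSH plans also include competences and action patterns beneath each drive; the sketch shows only the top-level arbitration.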

    Human Experience and AI Regulation

    Although nearly all artificial intelligence (AI) regulatory documents now reference the importance of human-centering digital systems, we frequently see AI ethics itself reduced to limited concerns, such as bias and, sometimes, power consumption. Although their impacts on human lives and our ecosystem render both of these absolutely critical, the ethical and regulatory challenges and obligations relating to AI do not stop there. Joseph Weizenbaum described the potential abuse of intelligent systems to make inhuman cruelty and acts of war more emotionally accessible to human operators. But more than this, he highlighted the need to solve the social issues that facilitate violent acts of war, and the immense potential the use of computers offers in this context. The present article reviews how the EU’s digital regulatory legislation—well enforced—could help us address such concerns. I begin by reviewing why the EU leads in this area, considering the legitimacy of its actions both regionally and globally. I then review the legislation already protecting us—the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act—and consider their role in achieving Weizenbaum’s goals. Finally, I consider the almost-promulgated AI Act before concluding with a brief discussion of the potential for future enforcement and more global regulatory cooperation.

    Evolutionary Psychology and Artificial Intelligence: The Impact of Artificial Intelligence on Human Behaviour

    Artificial Intelligence (AI) presents a new landscape for humanity. Both what we can do and the impact of our ordinary actions are changed by the innovation of digital and intelligent technology. In this chapter we postulate how AI impacts contemporary societies on an individual and collective level. We begin by teasing apart the current actual impact of AI on society from the impact that our cultural narratives surrounding AI have. We then consider the evolutionary mechanisms that maintain a stable society, such as heterogeneity, flexibility, and cooperation. Taking AI as a prosthetic intelligence, we discuss how—for better and worse—it enhances our connectivity, coordination, equality, distribution of control, and our ability to make predictions. We further give examples of how transparency of thoughts and behaviours influences call-out culture and behavioural manipulation, with consideration of group dynamics and tribalism. We next consider the efficacy and vulnerability of human trust, including the contexts in which blind trust in information is either adaptive or maladaptive in an age where the cost of information is decreasing. We then discuss trust in AI, and how we can calibrate trust so as to adaptively avoid over-trust and mistrust, using transparency as a mechanism. We then explore the barriers to AI increasing the accuracy of our perception, focusing on fake news. Finally, we look at the impact of information accuracy and the battles of individuals against false beliefs. Where available, we use models drawn from scientific simulations to justify and clarify our predictions and analysis.

    The meaning of the EPSRC principles of robotics

    In revisiting the Principles of Robotics (as we do in this special issue), it is important to carefully consider their full meaning – their history, the intentions behind them, and their actual societal impact to date. Here I address first the meaning of the document as a whole, then of its constituent parts. Further, I describe the nature of policy, and use the Principles as a case study to discuss how government and academia can interact in constructing policy. I defend the Principles and their main themes: that commercially manufactured robots should not be responsible parties under the law, and that users should not be deceived about robots' capacities or moral status. This perspective allows for the immediate incorporation of robots into UK society and law – the objective of the Principles. The Principles were not designed for every conceivable robot, but rather serve in part as design specifications for robots to be incorporated as legal products into British society.

    A role for consciousness in action selection

    This paper argues that conscious attention exists not so much for selecting an immediate action as for focusing learning of the action-selection mechanisms and predictive models on tasks and environmental contingencies likely to affect the conscious agent. It is perfectly possible to build this sort of system into machine intelligence, but it is not strictly necessary unless the intelligence needs to learn and is resource-bounded with respect to the rate of learning vs. the rate of relevant environmental change. Support for this theory is drawn from scientific research and AI simulations, and a few consequences are suggested with respect to self-consciousness and ethical obligations to and for AI.

    The Past Decade and Future of AI’s Impact on Society

    Artificial intelligence (AI) is a technical term referring to artifacts used to detect contexts or to effect actions in response to detected contexts. Our capacity to build such artifacts has been increasing, and with it the impact they have on our society. This article first documents the social and economic changes brought about by our use of AI, particularly but not exclusively focusing on the decade since the 2007 advent of smartphones, which contribute substantially to “big data” and therefore the efficacy of machine learning. It then projects from this the political, economic, and personal challenges confronting humanity in the near future, and offers policy recommendations. Overall, AI is not as unusual a technology as expected, but this very lack of expected form may have exposed us to a significantly increased urgency concerning familiar challenges. In particular, the identity and autonomy of both individuals and nations are challenged by the increased accessibility of knowledge.

    Improving robot transparency: real-time visualisation of robot AI substantially improves understanding in naive observers

    Deciphering the behaviour of intelligent others is a fundamental characteristic of our own intelligence. As we interact with complex intelligent artefacts, humans inevitably construct mental models to understand and predict their behaviour. If these models are incorrect or inadequate, we run the risk of self-deception or even harm. Here we demonstrate that providing even a simple, abstracted real-time visualisation of a robot’s AI can radically improve the transparency of machine cognition. Findings from both an online experiment using a video recording of a robot and from direct observation of a robot show substantial improvements in observers’ understanding of the robot’s behaviour. Unexpectedly, this improved understanding was correlated in one condition with an increased perception that the robot was ‘thinking’, but in no condition was the robot’s assessed intelligence impacted. In addition to our results, we describe our approach, the tools used, implications, and potential future research directions.
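
    As a rough illustration of the approach this abstract describes (not the authors' actual system, tools, or data format; all names below are assumptions), an agent can publish an abstracted trace of its action selection on every cycle, so observers see which behaviour was released and why, rather than only the resulting motion.

        # Hypothetical sketch: publish an abstracted decision trace alongside acting.
        import json
        import time
        from typing import Callable, List, Tuple

        Behaviour = Tuple[str, Callable[[], bool], Callable[[], None]]

        class TransparentController:
            def __init__(self, behaviours: List[Behaviour],
                         publish: Callable[[str], None]) -> None:
                self.behaviours = behaviours  # (name, trigger, action), highest priority first
                self.publish = publish        # e.g. send to a display, a log, or a websocket

            def tick(self) -> None:
                trace = {"time": time.time(), "considered": [], "active": None}
                for name, trigger, action in self.behaviours:
                    released = trigger()
                    trace["considered"].append({"behaviour": name, "released": released})
                    if released:
                        trace["active"] = name
                        action()
                        break
                self.publish(json.dumps(trace))  # observers see the selection itself

        # Toy usage: print the trace to stdout instead of driving a visualisation.
        controller = TransparentController(
            behaviours=[("avoid_obstacle", lambda: False, lambda: None),
                        ("explore", lambda: True, lambda: None)],
            publish=print,
        )
        controller.tick()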