701 research outputs found

    A role for consciousness in action selection


    Intelligence by Design: Principles of Modularity and Coordination for Engineering

    All intelligence relies on search --- for example, the search for an intelligent agent's next action. Search is only likely to succeed in resource-bounded agents if they have already been biased towards finding the right answer. In artificial agents, the primary source of bias is engineering. This dissertation describes an approach, Behavior-Oriented Design (BOD), for engineering complex agents. A complex agent is one that must arbitrate between potentially conflicting goals or behaviors. Behavior-oriented design builds on work in behavior-based and hybrid architectures for agents, and on the object-oriented approach to software engineering. The primary contributions of this dissertation are: 1. The BOD architecture: a modular architecture with each module providing specialized representations to facilitate learning. This includes one pre-specified module and representation for action selection or behavior arbitration. The specialized representation underlying BOD action selection is Parallel-rooted, Ordered, Slip-stack Hierarchical (POSH) reactive plans. 2. The BOD development process: an iterative process that alternately scales the agent's capabilities, then optimizes the agent for simplicity, exploiting tradeoffs between the component representations. This ongoing process for controlling complexity not only provides bias for the behaving agent, but also facilitates its maintenance and extendibility.
The secondary contributions of this dissertation include: two implementations of POSH action selection; a procedure for identifying useful idioms in agent architectures and using them to distribute knowledge across agent paradigms; several examples of applying BOD idioms to established architectures; an analysis and comparison of the attributes and design trends of a large number of agent architectures; a comparison of biological (particularly mammalian) intelligence to artificial agent architectures; a novel model of primate transitive inference; and many other examples of BOD agents and BOD development.
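The prioritized, re-checked-every-cycle character of POSH action selection can be illustrated with a minimal sketch. This is an assumption-laden toy, not either of the dissertation's POSH implementations: the names `Drive` and `DriveCollection` are illustrative, and the slip-stack and hierarchical competences are omitted; only the parallel-rooted, priority-ordered arbitration loop is shown.

```python
class Drive:
    def __init__(self, name, priority, trigger, action):
        self.name = name
        self.priority = priority   # lower number = higher priority
        self.trigger = trigger     # callable: state -> bool
        self.action = action       # callable: state -> new state

class DriveCollection:
    """Parallel-rooted: every cycle, all drives are re-checked from the top,
    so a newly urgent drive can preempt whatever was running."""
    def __init__(self, drives):
        self.drives = sorted(drives, key=lambda d: d.priority)

    def step(self, state):
        for drive in self.drives:           # ordered by priority
            if drive.trigger(state):
                return drive.action(state)  # fire highest-priority triggered drive
        return state                        # no drive triggered: state unchanged

# Toy agent: fleeing danger always preempts foraging.
flee = Drive("flee", 0, lambda s: s["danger"], lambda s: {**s, "danger": False})
forage = Drive("forage", 1, lambda s: s["hungry"], lambda s: {**s, "hungry": False})
agent = DriveCollection([flee, forage])

state = {"danger": True, "hungry": True}
state = agent.step(state)   # flees first, despite also being hungry
state = agent.step(state)   # danger gone, so now forages
```

Because arbitration restarts from the root each cycle, the agent never commits to foraging while danger is present — the bias toward the right action is engineered into the drive ordering.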

    Human Experience and AI Regulation

    Although nearly all artificial intelligence (AI) regulatory documents now reference the importance of human-centering digital systems, we frequently see AI ethics itself reduced to limited concerns, such as bias and, sometimes, power consumption. Although their impacts on human lives and our ecosystem render both of these absolutely critical, the ethical and regulatory challenges and obligations relating to AI do not stop there. Joseph Weizenbaum described the potential abuse of intelligent systems to make inhuman cruelty and acts of war more emotionally accessible to human operators. But more than this, he highlighted the need to solve the social issues that facilitate violent acts of war, and the immense potential the use of computers offers in this context. The present article reviews how the EU’s digital regulatory legislation—well enforced—could help us address such concerns. I begin by reviewing why the EU leads in this area, considering the legitimacy of its actions both regionally and globally. I then review the legislation already protecting us—the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act—and consider their role in achieving Weizenbaum’s goals. Finally, I consider the almost-promulgated AI Act before concluding with a brief discussion of the potential for future enforcement and more global regulatory cooperation.

    Evolutionary Psychology and Artificial Intelligence: The Impact of Artificial Intelligence on Human Behaviour

    Artificial Intelligence (AI) presents a new landscape for humanity. Both what we can do and the impact of our ordinary actions are changed by the innovation of digital and intelligent technology. In this chapter we postulate how AI impacts contemporary societies on an individual and collective level. We begin by teasing apart the current actual impact of AI on society from the impact that our cultural narratives surrounding AI have. We then consider the evolutionary mechanisms that maintain a stable society, such as heterogeneity, flexibility, and cooperation. Taking AI as a prosthetic intelligence, we discuss how—for better and worse—it enhances our connectivity, coordination, equality, distribution of control, and our ability to make predictions. We further give examples of how transparency of thoughts and behaviours influences call-out culture and behavioural manipulation, with consideration of group dynamics and tribalism. We next consider the efficacy and vulnerability of human trust, including the contexts in which blind trust in information is either adaptive or maladaptive in an age where the cost of information is decreasing. We then discuss trust in AI, and how we can calibrate trust adaptively, using transparency as a mechanism, so as to avoid both over-trust and mistrust. We then explore the barriers to AI increasing accuracy in our perception by focusing on fake news. Finally, we look at the impact of information accuracy, and the battles of individuals against false beliefs. Where available, we use models drawn from scientific simulations to justify and clarify our predictions and analysis.

    The meaning of the EPSRC principles of robotics

    In revisiting the Principles of Robotics (as we do in this special issue), it is important to carefully consider their full meaning – their history, the intentions behind them, and their actual societal impact to date. Here I address first the meaning of the document as a whole, then of its constituent parts. Further, I describe the nature of policy, and use the Principles as a case study to discuss how government and academia can interact in constructing policy. I defend the Principles and their main themes: that commercially manufactured robots should not be responsible parties under the law, and that users should not be deceived about robots' capacities or moral status. This perspective allows for the incorporation of robots immediately into UK society and law – the objective of the Principles. The Principles were not designed for every conceivable robot, but rather serve in part as design specifications for robots to be incorporated as legal products into British society.

    The Past Decade and Future of AI’s Impact on Society

    Artificial intelligence (AI) is a technical term referring to artifacts used to detect contexts or to effect actions in response to detected contexts. Our capacity to build such artifacts has been increasing, and with it the impact they have on our society. This article first documents the social and economic changes brought about by our use of AI, particularly but not exclusively focusing on the decade since the 2007 advent of smartphones, which contribute substantially to “big data” and therefore the efficacy of machine learning. It then projects from this the political, economic, and personal challenges confronting humanity in the near future, including policy recommendations. Overall, AI is not as unusual a technology as expected, but this very lack of expected form may have exposed us to a significantly increased urgency concerning familiar challenges. In particular, the identity and autonomy of both individuals and nations are challenged by the increased accessibility of knowledge.

    Trust, Communication, and Inequality

    Inequality in wealth is a pressing concern in many contemporary societies, where it has been shown to co-occur with political polarization and policy volatility; however, its causes are unclear. Here we demonstrate, in a simple model where social behavior spreads through learning, that inequality can covary reliably with other cooperative behavior, despite a lack of exogenous cause or deliberate coordination. In the context of simulated cultural evolution selecting for trust and cooperative exchange, we find both cooperation and inequality to be more prevalent in contexts where the same agents play both the roles of the trusting investor and the trusted investee, in contrast to the condition where these roles are divided between disjoint populations. Cooperation is more likely in contexts of high transparency about potential partners and with a high amount of partner choice, while inequality is more likely with high information but no choice in partners for those that want to invest. While not yet a full model of contemporary society, our approach holds promise for examining the causality and social contexts underlying shifts in income inequality.
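    The cooperative exchange underlying this kind of simulation can be sketched as a single round of the standard trust (investment) game. This is a minimal illustration under conventional assumptions — the multiplier of 3 and the fractional strategies are the textbook setup, not necessarily the paper's exact parameters, and the cultural-evolution learning dynamics are omitted.

```python
def trust_round(invest_frac, return_frac, endowment=1.0, multiplier=3.0):
    """One round of the trust game: the investor sends a fraction of their
    endowment, the transfer is multiplied en route, and the trustee (investee)
    returns a fraction of what was received."""
    sent = endowment * invest_frac
    received = sent * multiplier
    returned = received * return_frac
    investor_payoff = endowment - sent + returned
    trustee_payoff = received - returned
    return investor_payoff, trustee_payoff

# Trust met in kind leaves both parties better off than no exchange:
inv, tru = trust_round(1.0, 0.5)        # both earn 1.5
no_inv, no_tru = trust_round(0.0, 0.5)  # investor keeps 1.0, trustee gets 0.0
```

    The tension the simulation selects over is visible even here: total surplus requires the investor to send, but the trustee's short-run payoff is maximized by returning nothing, so transparency and partner choice determine whether trusting strategies can spread.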