
    Human-Machine Teamwork: An Exploration of Multi-Agent Systems, Team Cognition, and Collective Intelligence

    One of the major ways through which humans overcome complex challenges is teamwork. When humans share knowledge and information, and cooperate and coordinate towards shared goals, they overcome their individual limitations and achieve better solutions to difficult problems. The rise of artificial intelligence provides a unique opportunity to study teamwork between humans and machines, and potentially to discover insights about cognition and collaboration that can set the foundation for a world where humans work with, rather than against, artificial intelligence to solve problems that neither humans nor artificial intelligence can solve alone. To better understand human-machine teamwork, it is important to understand human-human teamwork (humans working together) and multi-agent systems (how artificial intelligence interacts as an agent that is part of a group) so as to identify the characteristics that make humans and machines good teammates. This lets us approach human-machine teamwork from the perspective of the human as well as that of the machine. Thus, to reach a more accurate understanding of how humans and machines can work together, we examine human-machine teamwork through a series of studies. In this dissertation, we conducted four studies and developed two theoretical models. First, we focused on human-machine cooperation. We paired human participants with reinforcement learning agents in two game theory scenarios where individual and collective interests conflict, making cooperation easy to detect. We show that different reinforcement learning models exhibit different levels of cooperation, and that humans are more likely to cooperate if they believe they are playing with another human rather than with a machine. Second, we focused on human-machine coordination. We once again paired humans with machines, this time in a game theory scenario that emphasizes convergence towards a mutually beneficial outcome. We also analyzed survey responses from the participants to highlight how many of the principles of human-human teamwork can still emerge in human-machine teams even though communication is not possible. Third, we reviewed the collective intelligence and prediction market literatures to develop a model for a prediction market that enables humans and machines to work together to improve predictions. The model supports artificial intelligence operating as a peer in the prediction market as well as a complementary aggregator. Fourth, we reviewed the team cognition and collective intelligence literature to develop a model for teamwork that integrates team cognition, collective intelligence, and artificial intelligence. The model provides a new foundation for thinking about teamwork beyond the forecasting domain. Next, we used a simulation of emergency response management to compare the teamwork of a variety of human-machine teams against human-human and machine-machine teams. Lastly, we ran another study that used a prediction market to examine the impact that having AI operate as a participant rather than an aggregator has on the market's predictive capacity. Our research will help identify which principles of human teamwork apply to human-machine teamwork, the role artificial intelligence can play in enhancing collective intelligence, and the effectiveness of human-machine teamwork compared to artificial intelligence acting alone. In the process, we expect to produce a substantial body of empirical results that can lay the groundwork for future research on human-machine teamwork.
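    The abstract's distinction between AI as a peer and AI as an aggregator in a prediction market can be sketched with a toy log-odds pooling rule. This is purely an illustration of the two roles, not the dissertation's actual model: the pooling rule, the forecast probabilities, and the aggregator weights below are all assumed for the example.

```python
import math

def logit(p):
    """Log-odds of a probability."""
    return math.log(p / (1 - p))

def aggregate(probs, weights=None):
    """Combine probability forecasts by weighted averaging of log-odds,
    a common pooling rule for probability judgments."""
    if weights is None:
        weights = [1 / len(probs)] * len(probs)
    z = sum(w * logit(p) for w, p in zip(weights, probs))
    return 1 / (1 + math.exp(-z))

human_forecasts = [0.6, 0.7, 0.55]   # illustrative human trader forecasts
ai_forecast = 0.8                    # illustrative AI forecast

# AI as a peer: its forecast enters the pool like any other participant's.
peer_pool = aggregate(human_forecasts + [ai_forecast])

# AI as an aggregator: it combines only the human forecasts, here by
# assigning each forecaster a weight (the weights are made up).
aggregator_pool = aggregate(human_forecasts, weights=[0.5, 0.3, 0.2])
```

    In the peer role the AI shifts the pooled probability toward its own forecast; in the aggregator role it instead decides how much each human forecast counts.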

    An Overview of Catastrophic AI Risks

    Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.

    Computational Theory of Mind for Human-Agent Coordination

    In everyday life, people often depend on their theory of mind, i.e., their ability to reason about the unobservable mental content of others to understand, explain, and predict their behaviour. Many agent-based models have been designed to develop computational theory of mind and analyze its effectiveness in various tasks and settings. However, most existing models are not generic (e.g., only applied in a given setting), not feasible (e.g., require too much information to be processed), or not human-inspired (e.g., do not capture the behavioral heuristics of humans). This hinders their applicability in many settings. Accordingly, we propose a new computational theory of mind, which captures the human decision heuristics of reasoning by abstracting individual beliefs about others. We specifically study computational affinity and show how it can be used in tandem with theory of mind reasoning when designing agent models for human-agent negotiation. We perform two-agent simulations to analyze the role of affinity in reaching agreements when there is a bound on the time that can be spent negotiating. Our results suggest that modeling affinity can ease the negotiation process by decreasing the number of rounds needed for an agreement and yield a higher benefit for agents with theory of mind reasoning.
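    A toy version of time-bounded negotiation can illustrate why modeling affinity might shorten bargaining. This is a sketch under assumed dynamics, not the paper's model: here "affinity" is simply taken to scale an agent's concession rate, and the demands, steps, and round limit are invented for the example.

```python
def negotiate(ask_a, ask_b, affinity_a, affinity_b, base_step=0.02, max_rounds=100):
    """Two agents concede toward each other each round until their offers
    cross or the round limit is hit. Higher affinity -> larger concessions."""
    for rounds in range(1, max_rounds + 1):
        ask_a -= base_step * (1 + affinity_a)   # A lowers its demand
        ask_b += base_step * (1 + affinity_b)   # B raises its offer
        if ask_b >= ask_a:                      # offers cross: agreement
            return rounds, (ask_a + ask_b) / 2
    return None, None                           # deadline reached, no deal

# Identical openings; only the assumed affinity differs.
rounds_low, _ = negotiate(1.0, 0.0, affinity_a=0.0, affinity_b=0.0)
rounds_high, _ = negotiate(1.0, 0.0, affinity_a=1.0, affinity_b=1.0)
```

    Under these assumptions the high-affinity pair agrees in roughly half the rounds, which mirrors the qualitative effect the abstract reports.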

    Strategic decision-making in multi-agent markets: The emergence of endogenous crises and volatility

    Traditional economic frameworks are built upon perfectly rational agents and equilibrium outcomes. However, during times of crisis, these frameworks prove insufficient. In this thesis, we take an alternative perspective based on "Complexity Economics", relaxing the assumption of perfectly rational agents and allowing for out-of-equilibrium dynamics. While many contemporary approaches explain crises and non-equilibrium market phenomena as the rational reaction to external news, the emergence of endogenous crises remains an open question. We begin addressing this question by demonstrating how a multi-agent model of heterogeneous boundedly rational agents acting according to heuristics can reproduce and forecast key non-linear price movements in the Australian housing market during boom and bust cycles. In order to provide foundations for such heuristic-based reasoning, we then propose a novel information-theoretic approach, Quantal Hierarchy, for modelling limitations in strategic reasoning, demonstrating how it convincingly and generically captures the decision-making of interacting agents in competitive markets, outperforming existing approaches. In addition, we demonstrate how a concise generalised market model can generate important stylised facts, such as fat tails and volatility clustering, and allow for the emergence of crises, purely endogenously. This thesis provides support for the interacting agent hypothesis, addressing the crucial question of whether crisis emergence and various stylised facts can be seen as endogenous phenomena, and provides a generic method for representing strategic agent reasoning.
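    The mechanism by which heterogeneous heuristics can generate volatility endogenously can be sketched with a minimal fundamentalist/chartist price model. This is an illustrative toy in the spirit of the interacting-agent hypothesis, not the thesis's housing-market model or its generalised market model; every parameter below is assumed.

```python
import random

def simulate(steps=2000, fundamental=100.0, seed=1):
    """Price formed by two heuristic types: fundamentalists pull the price
    toward its fundamental value, while chartists extrapolate the latest
    move. Large moves recruit more chartists, so bursts of volatility feed
    on themselves with no external news required."""
    random.seed(seed)
    prev, price = fundamental, fundamental
    chartist_weight = 0.5
    returns = []
    for _ in range(steps):
        trend = price - prev
        demand = ((1 - chartist_weight) * 0.05 * (fundamental - price)  # mean reversion
                  + chartist_weight * 0.9 * trend                       # trend following
                  + random.gauss(0, 0.5))                               # idiosyncratic noise
        prev, price = price, price + demand
        returns.append(price - prev)
        # Endogenous switching: the larger the last move, the more agents
        # behave as chartists next period (weight capped between 0.5 and 0.8).
        chartist_weight = 0.5 + 0.3 * min(abs(price - prev), 1.0)
    return returns
```

    Because the feedback from past moves to chartist weight is the only nonlinearity, any clustering of large returns in the output is endogenous by construction, which is the qualitative point the thesis makes.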

    The State of AI Ethics Report (June 2020)

    These past few months have been especially challenging, and the deployment of technology in ways hitherto untested at an unrivalled pace has left the internet and technology watchers aghast. Artificial intelligence has become the byword for technological progress and is being used in everything from helping us combat the COVID-19 pandemic to nudging our attention in different directions as we all spend increasingly large amounts of time online. It has never been more important that we keep a sharp eye on the development of this field and how it is shaping our society and interactions with each other. With this inaugural edition of the State of AI Ethics, we hope to bring forward the most important developments that caught our attention at the Montreal AI Ethics Institute this past quarter. Our goal is to help you navigate this ever-evolving field swiftly and allow you and your organization to make informed decisions. This pulse-check on the state of discourse, research, and development is geared towards researchers and practitioners alike who are making decisions on behalf of their organizations in considering the societal impacts of AI-enabled solutions. We cover a wide set of areas in this report spanning Agency and Responsibility, Security and Risk, Disinformation, Jobs and Labor, the Future of AI Ethics, and more. Our staff has worked tirelessly over the past quarter surfacing signal from the noise so that you are equipped with the right tools and knowledge to confidently tread this complex yet consequential domain.

    Intelligent Transportation Related Complex Systems and Sensors

    Building around innovative services related to different modes of transport and traffic management, intelligent transport systems (ITS) are being widely adopted worldwide to improve the efficiency and safety of the transportation system. They enable users to be better informed and make safer, more coordinated, and smarter decisions on the use of transport networks. Current ITSs are complex systems, made up of several components/sub-systems characterized by time-dependent interactions among themselves. Some examples of these transportation-related complex systems include: road traffic sensors, autonomous/automated cars, smart cities, smart sensors, virtual sensors, traffic control systems, smart roads, logistics systems, smart mobility systems, and many others that are emerging from niche areas. The efficient operation of these complex systems requires: i) efficient solutions to the issues of sensors/actuators used to capture and control the physical parameters of these systems, as well as the quality of data collected from these systems; ii) tackling complexities using simulations and analytical modelling techniques; and iii) applying optimization techniques to improve the performance of these systems. This volume includes twenty-four papers, which cover scientific concepts, frameworks, architectures, and various other ideas on the analytics, trends, and applications of transportation-related data.

    On the Combination of Game-Theoretic Learning and Multi Model Adaptive Filters

    This paper casts coordination of a team of robots within the framework of game-theoretic learning algorithms. In particular, a novel variant of fictitious play is proposed, by considering multi-model adaptive filters as a method to estimate other players' strategies. The proposed algorithm can be used as a coordination mechanism between players when they must make decisions under uncertainty. Each player chooses an action after taking into account the actions of the other players as well as the uncertainty. Uncertainty can arise either from noisy observations or from the various possible types of the other players. In addition, in contrast to other game-theoretic and heuristic algorithms for distributed optimisation, it is not necessary to find the optimal parameters a priori. Various parameter values can be used initially as inputs to different models, so the resulting decisions are aggregate results over all the parameter values. Simulations are used to test the performance of the proposed methodology against other game-theoretic learning algorithms.
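    Classical fictitious play, which the paper extends with multi-model adaptive filters, can be sketched in a few lines: each player best-responds to the empirical frequency of the opponent's past actions. This is the textbook version, not the paper's variant, and the matching-pennies payoffs are assumed purely for illustration.

```python
# Matching pennies: the row player wins on a match, the column player on a mismatch.
PAYOFF_ROW = [[1, -1], [-1, 1]]

def best_response(payoffs, opp_counts):
    """Best reply to the empirical frequency of the opponent's actions."""
    total = sum(opp_counts)
    beliefs = [c / total for c in opp_counts]
    values = [sum(p * b for p, b in zip(row, beliefs)) for row in payoffs]
    return values.index(max(values))

def fictitious_play(rounds=5000):
    counts_row, counts_col = [1, 1], [1, 1]   # uniform prior over opponent actions
    for _ in range(rounds):
        a_row = best_response(PAYOFF_ROW, counts_col)
        # The column player's payoff matrix is the negated transpose (zero-sum).
        a_col = best_response([[-p for p in row] for row in zip(*PAYOFF_ROW)],
                              counts_row)
        counts_row[a_row] += 1
        counts_col[a_col] += 1
    total = sum(counts_col)
    return [c / total for c in counts_col]    # column player's empirical mix
```

    In zero-sum games the empirical frequencies converge to equilibrium (here the 50/50 mix); the paper's contribution is, roughly, replacing the single frequency-count belief with a bank of adaptive filters so the estimate remains robust under noise and unknown opponent types.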

    LIPIcs, Volume 277, GIScience 2023, Complete Volume

    LIPIcs, Volume 277, GIScience 2023, Complete Volume