
    Multi-type Fair Resource Allocation for Distributed Multi-Robot Systems

    Fair resource allocation is essential to ensure that all resource requesters acquire adequate resources and accomplish their tasks. We propose solutions to the fairness problem in multi-type resource allocation for multi-robot systems with multiple resource requesters. We apply the dominant resource fairness (DRF) principle to two different systems: single-tasking robots with multi-robot tasks (STR-MRT) and multi-tasking robots with single-robot tasks (MTR-SRT). In STR-MRT, each robot can perform only one task at a time, tasks are divisible, and accomplishing each task requires one or more robots. In MTR-SRT, each robot can perform multiple tasks at a time, tasks are indivisible, and accomplishing each task requires only one robot. We present centralized solutions to the fairness problem in STR-MRT, and we also model decentralized resource allocation in STR-MRT as a coordination game between the robots. Each robot subgroup is formed by robots that strategically select the same resource requester, and for the requester associated with a given subgroup, a consensus-based team-formation algorithm then chooses the minimal set of robots needed to accomplish the task. We leverage a Deep Q-Network (DQN) to support requester selection, and the results suggest that the DQN outperforms commonly used Q-learning. Finally, because a centralized solution already exists for MTR-SRT, we propose two decentralized solutions to promote fair resource allocation in that setting. The first is a task-forwarding solution in which the robots negotiate the placement of each task. In the second, each robot first selects resource requesters and then independently allocates resources to tasks arriving from the selected requesters; the requester-selection phase is modeled as a coordination game solved by reinforcement learning. The experimental results suggest that both approaches outperform their baselines.
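    The abstract builds on the dominant resource fairness (DRF) principle. As a rough illustration of that principle only (not the paper's own algorithm), the following minimal Python sketch performs progressive DRF allocation in the spirit of Ghodsi et al. (2011); the names capacity and demands and the toy numbers are illustrative assumptions.

    ```python
    # Minimal sketch of progressive dominant resource fairness (DRF) allocation.
    # Illustrative only: repeatedly grant one task's worth of resources to the
    # requester with the smallest dominant share, until no further demand fits.

    def drf_allocate(capacity, demands, max_rounds=1000):
        n = len(demands)
        allocation = [[0.0] * len(capacity) for _ in range(n)]
        used = [0.0] * len(capacity)
        dominant_share = [0.0] * n

        for _ in range(max_rounds):
            # Consider requesters in order of increasing dominant share.
            order = sorted(range(n), key=lambda i: dominant_share[i])
            progressed = False
            for i in order:
                fits = all(used[r] + demands[i][r] <= capacity[r]
                           for r in range(len(capacity)))
                if fits:
                    for r in range(len(capacity)):
                        used[r] += demands[i][r]
                        allocation[i][r] += demands[i][r]
                    # Dominant share = largest fraction of any resource held.
                    dominant_share[i] = max(allocation[i][r] / capacity[r]
                                            for r in range(len(capacity)))
                    progressed = True
                    break
            if not progressed:
                break
        return allocation

    # Toy example: 9 CPUs and 18 GB of memory shared by two requesters.
    print(drf_allocate(capacity=[9.0, 18.0],
                       demands=[[1.0, 4.0], [3.0, 1.0]]))
    ```

    In this toy example the allocation converges to [3, 12] and [6, 2], i.e. both requesters end up with an equal dominant share of 2/3.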

    Understanding Human-AI Augmentation in Business and Management Context: A Literature Review

    The relationship between humans and artificial intelligence (AI) has attracted debate and polarized views. A key area of this debate that has received research attention is the capability of humans and AI to augment each other to achieve better outcomes. While research interest in the topic is growing, the literature is currently dispersed across the management disciplines, making it hard for researchers to benefit from accumulated knowledge in this domain. This study synthesizes the literature and describes current research findings in order to provide a foundation for future research in this area. Based on a systematic review, we identify and discuss three emerging themes in the literature and highlight possible challenges related to integrating AI in organisations. A future research agenda is also presented.

    Cooperative AI via Decentralized Commitment Devices

    Credible commitment devices have been a popular approach for robust multi-agent coordination. However, existing commitment mechanisms face limitations regarding privacy, integrity, and susceptibility to strategic behavior by the mediator or the users. It is unclear whether the cooperative AI techniques we study are robust to real-world incentives and attack vectors. Decentralized commitment devices that utilize cryptography, however, have been deployed in the wild, and numerous studies have shown their ability to coordinate algorithmic agents facing adversarial opponents with significant economic incentives, currently on the order of several million to billions of dollars. In this paper, we use examples from the decentralization literature and, in particular, the Maximal Extractable Value (MEV) literature (arXiv:1904.05234) to illustrate potential security issues in cooperative AI. We call for expanded research into decentralized commitments to advance cooperative AI capabilities for secure coordination in open environments, and for empirical testing frameworks to evaluate multi-agent coordination ability under real-world commitment constraints.
    Comment: NeurIPS 2023 Multi-Agent Security Workshop
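    The abstract does not detail the specific commitment mechanisms studied. As a generic illustration of a cryptographic commitment device, here is a minimal hash-based commit-reveal sketch in Python; this is a textbook construction, not the paper's protocol.

    ```python
    # Illustrative hash-based commit-reveal scheme: a simple cryptographic
    # commitment device that is binding (the committer cannot change the action)
    # and hiding (the action is not revealed until the reveal phase).
    import hashlib
    import secrets

    def commit(action: str) -> tuple[str, bytes]:
        """Commit to an action without revealing it; returns (commitment, nonce)."""
        nonce = secrets.token_bytes(32)  # randomness keeps the commitment hiding
        digest = hashlib.sha256(nonce + action.encode()).hexdigest()
        return digest, nonce

    def verify(commitment: str, action: str, nonce: bytes) -> bool:
        """Check that a revealed (action, nonce) pair matches the commitment."""
        return hashlib.sha256(nonce + action.encode()).hexdigest() == commitment

    # An agent publishes the commitment first and reveals the action later.
    c, nonce = commit("cooperate")
    assert verify(c, "cooperate", nonce)
    assert not verify(c, "defect", nonce)
    ```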

    Multiscale computation and dynamic attention in biological and artificial intelligence

    Biological and artificial intelligence (AI) are often defined by their capacity to achieve a hierarchy of short-term and long-term goals that require incorporating information over time and space at both local and global scales. More advanced forms of this capacity involve the adaptive modulation of integration across scales, which resolves computational inefficiency and explore-exploit dilemmas at the same time. Research in both neuroscience and AI has made progress towards understanding architectures that achieve this. Insight into biological computation comes from phenomena such as decision inertia, habit formation, information search, risky choices and foraging. Across these domains, the brain is equipped with mechanisms (such as the dorsal anterior cingulate and dorsolateral prefrontal cortex) that can represent and modulate across scales, both through top-down control processes and through local-to-global consolidation as information progresses from sensory to prefrontal areas. Paralleling these biological architectures, progress in AI is marked by innovations in dynamic multiscale modulation, moving from recurrent and convolutional neural networks, which use fixed scalings, to attention, transformers, dynamic convolutions, and consciousness priors, which adapt scale to the input and increase scale breadth. The use and development of these multiscale innovations in robotic agents, game AI, and natural language processing (NLP) are pushing the boundaries of AI achievements. By juxtaposing biological and artificial intelligence, the present work underscores the critical importance of multiscale processing to general intelligence, and highlights innovations in, and differences between, the futures of biological and artificial intelligence.
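    The abstract contrasts fixed-scale recurrent and convolutional architectures with attention, which recomputes its mixing weights from the input itself. As a generic illustration of that contrast (not taken from the paper), here is a minimal NumPy sketch of scaled dot-product attention.

    ```python
    # Minimal NumPy sketch of scaled dot-product attention (Vaswani et al., 2017).
    # Unlike a convolution with fixed kernels, the mixing weights here are
    # recomputed from the input, i.e. the integration is dynamically modulated.
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Q, K, V: arrays of shape (sequence_length, d_model)."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                  # pairwise similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
        return weights @ V                               # input-dependent mixing

    x = np.random.randn(5, 8)                    # toy sequence: 5 tokens, width 8
    out = scaled_dot_product_attention(x, x, x)  # self-attention
    print(out.shape)                             # (5, 8)
    ```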