
    A multi-agent environment in robotics

    The use of Multi-Agent Systems as a Distributed AI paradigm for robotics is the principal aim of our present work. In this paper we consider the concepts and a suitable architecture that a set of agents needs in order to cooperate in solving non-trivial tasks. Each agent is a set of software modules, each implementing a function required for cooperation. A Monitor, an Acquaintance module, a Self-knowledge module, an Agenda, and an Input queue, layered on top of each intelligent system, are the fundamental modules that support the process of cooperation, while the overall goal is pursued by the community of cooperative agents as a whole. The agents in our testbed include Vision, a Planner, a World Model, and the Robot itself.
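    The architecture described above lends itself to a simple object sketch. Below is a minimal, hypothetical rendering of the per-agent module layout (Monitor, Acquaintance and Self-knowledge modules, Agenda, Input queue); all class names, fields, and the message format are our own illustrative assumptions, not definitions from the paper.

```python
# Hypothetical sketch of the per-agent cooperation modules; nothing
# here is prescribed by the paper itself.
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Message:
    sender: str
    task: str

@dataclass
class CooperativeAgent:
    name: str                                          # e.g. "Vision", "Planner"
    self_knowledge: set = field(default_factory=set)   # Self-knowledge: own skills
    acquaintances: dict = field(default_factory=dict)  # Acquaintance: peer -> skills
    agenda: list = field(default_factory=list)         # Agenda: accepted tasks
    input_queue: deque = field(default_factory=deque)  # Input queue: pending requests

    def monitor(self):
        """Monitor module: accept tasks this agent can do, delegate the rest."""
        while self.input_queue:
            msg = self.input_queue.popleft()
            if msg.task in self.self_knowledge:
                self.agenda.append(msg.task)
            else:
                # Consult the Acquaintance module for a peer with this skill.
                peer = next((n for n, skills in self.acquaintances.items()
                             if msg.task in skills), None)
                if peer is not None:
                    print(f"{self.name}: delegating {msg.task!r} to {peer}")

vision = CooperativeAgent("Vision", self_knowledge={"detect_object"},
                          acquaintances={"Planner": {"plan_path"}})
vision.input_queue.append(Message("Robot", "plan_path"))
vision.monitor()   # -> Vision: delegating 'plan_path' to Planner
```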

    Can models of agents be transferred between different areas?

    One of the main reasons for the sustained activity and interest in the field of agent-based systems, apart from the obvious recognition of its value as a natural and intuitive way of understanding the world, is its reach into many different and distinct fields of investigation. Indeed, the notions of agents and multi-agent systems are relevant to fields ranging from economics to robotics: they contribute to the foundations of those fields, are influenced by their ongoing research, and find in them many domains of application. While these various disciplines constitute a rich and diverse environment for agent research, the way in which they may have been linked by it is a much less considered issue. The purpose of this panel was to examine exactly this concern: the relationships between different areas that have resulted from agent research. Informed by the experience of the participants in the areas of robotics, social simulation, economics, computer science and artificial intelligence, the discussion was lively and sometimes heated.

    HoME: a Household Multimodal Environment

    We introduce HoME: a Household Multimodal Environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context. HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning, and more. We hope HoME better enables artificial agents to learn as humans do: in an interactive, multimodal, and richly contextualized setting.

    Comment: Presented at NIPS 2017's Visually-Grounded Interaction and Language Workshop.
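    Since HoME advertises OpenAI Gym compatibility, interaction presumably follows the standard Gym loop sketched below; the environment id "Home-v0" and the shape of the multimodal observation are placeholders, not confirmed names from the HoME codebase.

```python
# Standard classic-Gym interaction loop; "Home-v0" is a hypothetical
# environment id, not a name taken from the HoME platform.
import gym

env = gym.make("Home-v0")                       # hypothetical id
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()          # random policy for illustration
    obs, reward, done, info = env.step(action)  # obs may bundle vision/audio/semantics
    if done:
        obs = env.reset()
env.close()
```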

    Mean Field Behaviour of Collaborative Multi-Agent Foragers

    Collaborative multi-agent robotic systems in which agents coordinate by modifying a shared environment often exhibit undesired dynamical couplings that complicate analysis and experiments when solving a specific problem or task. At the same time, biologically-inspired robotics relies on simplifying agents and increasing their number to obtain more efficient solutions to such problems, drawing similarities with natural processes. In this work we focus on a biologically-inspired multi-agent system solving collaborative foraging. We show how mean field techniques can be used to re-formulate such a stochastic multi-agent problem into a deterministic autonomous system. This decouples the agent dynamics, enabling the computation of limit behaviours and the analysis of optimality guarantees. Furthermore, we analyse how having a finite number of agents affects performance when compared to the mean field limit, and we discuss the implications of such limit approximations in this multi-agent system, which bear on more general collaborative stochastic problems.
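    To illustrate the flavor of this mean-field reduction (not the paper's actual model), consider a toy two-state foraging caricature: each agent is either Searching or Transporting and flips states stochastically. As the number of agents N grows, the empirical fraction of transporters concentrates around the fixed point of the deterministic mean-field ODE dx/dt = a(1 - x) - bx, namely x* = a/(a + b). The rates and the model itself are illustrative assumptions.

```python
# Toy illustration of a mean-field limit (our own caricature, not the
# paper's model): each agent picks up food at rate a and deposits it
# at rate b. Fluctuations around x* = a/(a+b) shrink as N grows.
import numpy as np

a, b, dt, steps = 0.3, 0.2, 0.1, 2000
rng = np.random.default_rng(0)

def simulate(n_agents):
    """Finite-N stochastic simulation; returns final fraction transporting."""
    transporting = np.zeros(n_agents, dtype=bool)
    for _ in range(steps):
        u = rng.random(n_agents)
        pick_up = ~transporting & (u < a * dt)   # Searching -> Transporting
        drop    =  transporting & (u < b * dt)   # Transporting -> Searching
        transporting ^= pick_up | drop           # flip exactly those agents
    return transporting.mean()

# Mean-field limit: dx/dt = a(1 - x) - b x has fixed point x* = a/(a + b).
x_star = a / (a + b)
for n in (10, 100, 10000):
    print(f"N={n:>6}: empirical {simulate(n):.3f} vs mean field {x_star:.3f}")
```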

    Enhancing Exploration and Safety in Deep Reinforcement Learning

    A Deep Reinforcement Learning (DRL) agent tries to learn a policy that maximizes a long-term objective by trial and error in large state spaces. However, this learning paradigm requires a non-trivial amount of interaction with the environment to achieve good performance. Moreover, critical applications, such as robotics, typically involve safety criteria that must be considered when designing novel DRL solutions. Hence, devising safe learning approaches with efficient exploration is crucial to avoid getting stuck in local optima, failing to learn properly, or causing damage to the surrounding environment. This thesis focuses on developing Deep Reinforcement Learning algorithms that foster efficient exploration and safer behaviors in simulated and real domains of interest, ranging from robotics to multi-agent systems. To this end, we rely both on standard benchmarks, such as SafetyGym, and on robotic tasks widely adopted in the literature (e.g., manipulation, navigation). This variety of problems is crucial for assessing the statistical significance of our empirical studies and the generalization skills of our approaches.

    We initially benchmark the sample efficiency versus performance trade-off between value-based and policy-gradient algorithms. This part highlights the benefits of using non-standard simulation environments (i.e., Unity), which also facilitate the development of further optimizations for DRL. We also discuss the limitations of standard evaluation metrics (e.g., return) in characterizing the actual behaviors of a policy, proposing the use of Formal Verification (FV) as a practical methodology for evaluating behaviors against desired specifications.

    The second part introduces Evolutionary Algorithms (EAs) as a gradient-free, complementary optimization strategy. In detail, we combine population-based and gradient-based DRL to diversify exploration and improve performance in both single- and multi-agent applications. For the latter, we discuss how prior Multi-Agent (Deep) Reinforcement Learning (MARL) approaches hinder exploration, and we propose an architecture that favors cooperation without affecting exploration.
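    As a rough illustration of the population-based plus gradient-based combination mentioned in the second part, the toy sketch below alternates an evolution-strategies-style perturbation step with a gradient ascent step on a stand-in return function; the objective, hyperparameters, and update rule are all our own placeholder assumptions, not the thesis's algorithm.

```python
# Toy sketch of mixing population-based search with gradient ascent.
# The quadratic "return" is a stand-in for policy evaluation; a real
# agent would estimate returns and gradients from environment rollouts.
import numpy as np

rng = np.random.default_rng(1)

def estimated_return(theta):
    """Placeholder objective with optimum at theta = 3 in every dimension."""
    return -np.sum((theta - 3.0) ** 2)

theta = np.zeros(5)
pop_size, sigma, lr = 32, 0.5, 0.1
for step in range(200):
    # Population half: perturb the current parameters, keep the elite member.
    pop = theta + sigma * rng.standard_normal((pop_size, theta.size))
    fitness = np.array([estimated_return(p) for p in pop])
    elite = pop[fitness.argmax()]
    # Gradient half: one ascent step from the elite (analytic here,
    # sampled from rollouts in practice).
    grad = -2.0 * (elite - 3.0)
    theta = elite + lr * grad

print("learned parameters:", np.round(theta, 2))   # -> close to 3.0 everywhere
```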

    A Versatile Agent for Fast Learning from Human Instructors

    In recent years, a myriad of impressive works on intelligent robotics policies has appeared, thanks to advances in machine learning. However, inefficiency and a lack of transferability have hindered these algorithms in pragmatic applications, especially in human-robot collaboration, where few-shot fast learning and high flexibility are essential. To surmount this obstacle, we introduce a "Policy Pool" containing pre-trained skills that can be easily accessed and reused. An agent is employed to govern the "Policy Pool" by unfolding the requisite skills in a flexible sequence, contingent on task-specific preferences. These preferences can be automatically interpreted from one or a few human expert demonstrations. Under this hierarchical setting, our algorithm is able to learn a sparse-reward, multi-stage task from only one demonstration in a Mini-Grid environment, showing the potential for quickly mastering complex robotics skills from human instructors. Additionally, the design of our algorithm allows for lifelong learning, making it a versatile agent.
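    A minimal sketch of the "Policy Pool" idea as we read it: a high-level controller maps a demonstration onto a sequence of pre-trained skills and executes them in order. The skill names, matching rule, and demonstration format below are entirely hypothetical illustrations, not the paper's interface.

```python
# Hypothetical rendering of a "Policy Pool" sequenced by a high-level
# controller; skill names and the matching rule are our own placeholders.
from typing import Callable, Dict, List

PolicyPool = Dict[str, Callable[[dict], str]]   # skill name -> policy function

pool: PolicyPool = {
    "go_to_key": lambda obs: "move",
    "pick_key":  lambda obs: "pickup",
    "open_door": lambda obs: "toggle",
}

def interpret_demonstration(demo: List[str]) -> List[str]:
    """Map one expert demonstration onto a sequence of pooled skills."""
    return [skill for skill in demo if skill in pool]

def run(schedule: List[str], obs: dict) -> None:
    """Unfold the selected skills in order, querying each for an action."""
    for skill in schedule:
        action = pool[skill](obs)
        print(f"{skill} -> {action}")

run(interpret_demonstration(["go_to_key", "pick_key", "open_door"]), obs={})
```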