Applications of agent architectures to decision support in distributed simulation and training systems
This work develops the approach and presents the results of a new model for applying intelligent agents to complex distributed interactive simulation for command and control. In the framework of tactical command, control, communications, computers and intelligence (C4I), software agents provide a novel approach to efficient decision support and distributed interactive mission training. An agent-based architecture for decision support is designed, implemented, and applied in a distributed interactive simulation to significantly enhance command and control training during simulated exercises. The architecture is based on monitoring, evaluation, and advice agents, which cooperate to provide alternatives to the decision-maker in a time- and resource-constrained environment. The architecture is implemented and tested within the context of an AWACS Weapons Director trainer tool.
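The monitoring, evaluation, and advice agents described above form a pipeline that can be sketched as follows. This is an illustrative sketch only; the agent names are from the abstract, but the features, scoring, and data shapes are assumptions, not the paper's implementation.

```python
# Sketch of the monitoring -> evaluation -> advice agent pipeline.
# Thresholds, feature names, and candidate actions are illustrative
# assumptions, not the architecture described in the paper.

def monitor(track):
    """Monitoring agent: extract salient features of the simulated situation."""
    return {"threat": track["closing_speed"] > 300, "fuel": track["fuel"]}

def evaluate(features):
    """Evaluation agent: score candidate actions under time/resource limits."""
    options = [("intercept", 0.9 if features["threat"] else 0.2),
               ("refuel", 0.8 if features["fuel"] < 0.3 else 0.1)]
    return sorted(options, key=lambda o: -o[1])  # best-scoring first

def advise(ranked, k=2):
    """Advice agent: present the top-k alternatives to the decision-maker."""
    return [name for name, _ in ranked[:k]]

advice = advise(evaluate(monitor({"closing_speed": 400, "fuel": 0.2})))
```

The key property sketched here is that no single agent decides: the advice agent only surfaces ranked alternatives, leaving the final call to the human Weapons Director.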
The foundation of the work required a wide range of preliminary research topics to be covered, including real-time systems, resource allocation, agent-based computing, decision support systems, and distributed interactive simulations. The major contribution of our work is the construction of a multi-agent architecture and its application to an operational decision support system for command and control interactive simulation. The architectural design for the multi-agent system was drafted in the first stage of the work. In the next stage, rules of engagement and objective and cost functions were determined in the AWACS (Air Force command and control) decision support domain. Finally, the multi-agent architecture was implemented and evaluated inside a distributed interactive simulation test-bed for AWACS WDs. The evaluation process combined individual and team use of the decision support system to improve the performance results of WD trainees.
The decision support system is designed and implemented as a distributed architecture for performance-oriented management of software agents. The approach provides new agent interaction protocols and utilizes agent performance monitoring and remote synchronization mechanisms. This multi-agent architecture enables direct and indirect agent communication as well as dynamic hierarchical agent coordination. Inter-agent communications use predefined interfaces, protocols, and open channels with specified ontology and semantics. Services can be requested, and responses with results received, over these communication modes. Both traditional (functional) parameters and nonfunctional requirements (e.g., QoS, deadline) are captured in service requests.
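A service request that carries both functional parameters and nonfunctional (QoS) requirements, as the abstract describes, might look like the following minimal sketch. The field names, the `deadline_ms` key, and the stub responder are assumptions for illustration, not the system's actual protocol.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceRequest:
    """A service request carrying functional parameters alongside
    nonfunctional (QoS) requirements such as a deadline."""
    service: str                                  # requested service name
    params: dict = field(default_factory=dict)    # functional parameters
    qos: dict = field(default_factory=dict)       # nonfunctional: deadline, priority, ...

def handle(request: ServiceRequest) -> dict:
    """Stub responder: echo the service and report whether a deadline was set
    and positive (a placeholder for real deadline-aware scheduling)."""
    deadline = request.qos.get("deadline_ms")
    return {"service": request.service,
            "met_deadline": deadline is None or deadline > 0}

req = ServiceRequest("track_target", params={"target_id": 7},
                     qos={"deadline_ms": 50})
resp = handle(req)
```

Keeping QoS fields in a separate `qos` dictionary lets the transport and scheduler inspect nonfunctional requirements without parsing service-specific parameters.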
Safe, Remote-Access Swarm Robotics Research on the Robotarium
This paper describes the development of the Robotarium -- a remotely
accessible, multi-robot research facility. The impetus behind the Robotarium is
that multi-robot testbeds constitute an integral and essential part of the
multi-agent research cycle, yet they are expensive, complex, and time-consuming
to develop, operate, and maintain. These resource constraints, in turn, limit
access for large groups of researchers and students, which is what the
Robotarium is remedying by providing users with remote access to a
state-of-the-art multi-robot test facility. This paper details the design and
operation of the Robotarium as well as connects these to the particular
considerations one must take when making complex hardware remotely accessible.
In particular, safety must be built in at the design phase without overly
constraining which coordinated control programs the users can upload and
execute, which calls for minimally invasive safety routines with provable
performance guarantees.

Comment: 13 pages, 7 figures, 3 code samples, 72 references
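A "minimally invasive" safety routine in the sense above modifies a user's control input only when safety is at stake. The following 1-D sketch illustrates the idea for single-integrator dynamics with a barrier-style state constraint; it is an assumed toy model for illustration, not the Robotarium's actual safety implementation.

```python
def safety_filter(x, u, x_min, x_max, gamma=1.0):
    """Minimally invasive safety filter for 1-D single-integrator dynamics
    x' = u: return the admissible input closest to the user's input u that
    keeps x within [x_min, x_max].

    The admissible set comes from barrier-style constraints
        u >= -gamma * (x - x_min)   and   u <= gamma * (x_max - x),
    so far from the boundary the user's input passes through unchanged,
    and near the boundary it is clamped just enough to stay safe.
    """
    lo = -gamma * (x - x_min)   # lower bound keeps x above x_min
    hi = gamma * (x_max - x)    # upper bound keeps x below x_max
    return min(max(u, lo), hi)
```

In the interior of the safe set the filter is the identity, which is exactly the "minimally invasive" property: arbitrary user programs run unmodified until they approach the constraint boundary.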
A middleware for a large array of cameras
Large arrays of cameras are increasingly being employed for producing the high-quality image sequences needed for motion analysis research. This leads to a logistical problem: the coordination and control of a large number of cameras. In this paper, we use a lightweight multi-agent system to coordinate such camera arrays. The agent framework provides more than a remote sensor access API. It allows reconfigurable and transparent access to cameras, as well as software agents capable of intelligent processing. Furthermore, it eases maintenance by encouraging code reuse. Additionally, our agent system includes an automatic discovery mechanism at startup, and multiple language bindings. Performance tests showed the lightweight nature of the framework while validating its correctness and scalability. Two different camera agents were implemented to provide access to a large array of distributed cameras. Correct operation of these camera agents was confirmed via several image processing agents.
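The discovery-then-request pattern the middleware describes can be sketched with an in-process registry: camera agents register themselves at startup, and image-processing agents discover and query them by type. The class, method names, and message shapes below are assumptions for illustration, not the middleware's actual API.

```python
class AgentRegistry:
    """In-process sketch of an agent discovery mechanism: agents register
    under a kind (e.g. 'camera') and clients look them up by that kind."""

    def __init__(self):
        self._agents = {}   # kind -> {name: handler}

    def register(self, kind, name, handler):
        """Called by an agent at startup to announce itself."""
        self._agents.setdefault(kind, {})[name] = handler

    def discover(self, kind):
        """Return the sorted names of all agents of the given kind."""
        return sorted(self._agents.get(kind, {}))

    def request(self, kind, name, *args):
        """Transparent access: route a request to a named agent."""
        return self._agents[kind][name](*args)

registry = AgentRegistry()
# Three camera agents announce themselves at startup.
for cam_id in range(3):
    registry.register("camera", f"cam{cam_id}",
                      lambda frame_no, cam=cam_id: {"camera": cam,
                                                    "frame": frame_no})

cams = registry.discover("camera")              # all cameras, no hard-coding
frame = registry.request("camera", "cam1", 42)  # fetch frame 42 from cam1
```

Because clients only discover agents by kind, cameras can be added or removed without reconfiguring the processing agents, which is the "reconfigurable and transparent access" the abstract claims.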
Mixed reality participants in smart meeting rooms and smart home environments
Human–computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in its multimodal interaction with a smart environment, the user displays characteristics that show how the user, not necessarily consciously, verbally and nonverbally provides the smart environment with useful input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment, smart objects (e.g., mobile robots, smart furniture), and human participants. Therefore it is useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in a virtual meeting room, we discuss how remote meeting participants can take part in meeting activities, and we make some observations on translating research results to smart home environments.
Resource-aware IoT Control: Saving Communication through Predictive Triggering
The Internet of Things (IoT) interconnects multiple physical devices in
large-scale networks. When the 'things' coordinate decisions and act
collectively on shared information, feedback is introduced between them.
Multiple feedback loops are thus closed over a shared, general-purpose network.
Traditional feedback control is unsuitable for design of IoT control because it
relies on high-rate periodic communication and is ignorant of the shared
network resource. Therefore, recent event-based estimation methods are applied
herein for resource-aware IoT control allowing agents to decide online whether
communication with other agents is needed, or not. While this can reduce
network traffic significantly, a severe limitation of typical event-based
approaches is the need for instantaneous triggering decisions that leave no
time to reallocate freed resources (e.g., communication slots), which hence
remain unused. To address this problem, novel predictive and self triggering
protocols are proposed herein. From a unified Bayesian decision framework, two
schemes are developed: self triggers that predict, at the current triggering
instant, the next one; and predictive triggers that check at every time step,
whether communication will be needed at a given prediction horizon. The
suitability of these triggers for feedback control is demonstrated in hardware
experiments on a cart-pole, and scalability is discussed with a multi-vehicle
simulation.

Comment: 16 pages, 15 figures, accepted article to appear in IEEE Internet of Things Journal. arXiv admin note: text overlap with arXiv:1609.0753
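The core of a predictive trigger, as described above, is checking ahead of time whether the remote estimate will still be accurate enough at a given prediction horizon. The scalar sketch below illustrates this with open-loop error propagation; the abstract's triggers are derived from a Bayesian decision framework, so this is a simplified assumed model, not the paper's method.

```python
def predictive_trigger(x, x_hat, a, horizon, delta):
    """Predictive trigger sketch for scalar linear dynamics x+ = a*x.

    The remote agent propagates its estimate x_hat open loop, so the
    estimation error also evolves as err+ = a*err.  Communicate (return
    True) if the predicted error magnitude at the given horizon would
    exceed the tolerance delta -- deciding 'horizon' steps early, which
    leaves time to reallocate the freed communication slot.
    """
    err_now = x - x_hat
    err_pred = (a ** horizon) * err_now   # open-loop error propagation
    return abs(err_pred) >= delta
```

Compared with a classic event trigger (`abs(err_now) >= delta`, fired instantaneously), the look-ahead is what allows unused communication slots to be handed to other agents instead of going to waste.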