Application of Techniques for MAP Estimation to Distributed Constraint Optimization Problem
The problem of efficiently finding near-optimal decisions in multi-agent systems has become increasingly important because of the growing number of multi-agent applications with large numbers of agents operating in real-world environments. In these systems, agents are often subject to tight resource constraints and have only local views. When agents have non-global constraints, each of which is independent, the problem can be formalized as a distributed constraint optimization problem (DCOP). The DCOP is closely related to the problem of inference on graphical models, and many approaches from the inference literature have been adopted to solve DCOPs. We focus on the Max-Sum and Action-GDL algorithms, which are DCOP variants of the popular Max-Product and Belief Propagation inference algorithms, respectively. Both algorithms are well suited to multi-agent systems because they are distributed by nature and require less communication than most DCOP algorithms. However, their resource requirements are still high for some multi-agent domains, and various aspects of the algorithms have not been well studied for use in general multi-agent settings.
This thesis is concerned with a variety of issues in applying the Max-Sum and Action-GDL algorithms to general multi-agent settings. First, we develop a hybrid of ADOPT and Action-GDL in order to overcome the communication complexity of DCOPs. Secondly, we extend the Max-Sum algorithm to operate more efficiently in more general multi-agent settings in which computational complexity is high, providing an algorithm with a lower expected computational complexity for DCOPs even with n-ary constraints. Finally, most of the DCOP literature assumes a one-to-one mapping between variables and agents. In real applications, however, many-to-one mappings are prevalent and can also be beneficial in terms of communication and hardware cost in situations where agents act as independent computing units. We consider how to exploit such mappings in order to increase efficiency.
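To make the message-passing idea concrete, the following is a minimal illustrative sketch of Max-Sum on a tiny tree-shaped DCOP. The utilities and variable names are assumed for illustration; this is a toy example of the general technique, not the algorithms or code developed in the thesis.

```python
# Illustrative toy example (assumed utilities, not the thesis's code): Max-Sum
# on a tiny tree-shaped DCOP with two variables x1, x2 over the domain {0, 1},
# one shared binary utility factor, and one unary utility factor per variable.
# On a tree-shaped factor graph, one message pass in each direction is exact.

domain = [0, 1]
unary = {
    "x1": {0: 1.0, 1: 0.0},   # local utility of each value of x1
    "x2": {0: 0.0, 1: 2.0},   # local utility of each value of x2
}

def pair_utility(v1, v2):      # binary constraint shared by x1 and x2
    return 3.0 if v1 == v2 else 0.0

# Factor-to-variable messages: maximize over the other variable after adding
# its variable-to-factor message (here just its unary utility).
msg_to_x1 = {v1: max(pair_utility(v1, v2) + unary["x2"][v2] for v2 in domain)
             for v1 in domain}
msg_to_x2 = {v2: max(pair_utility(v1, v2) + unary["x1"][v1] for v1 in domain)
             for v2 in domain}

# Each agent then decides locally from its own utility plus the incoming message.
best_x1 = max(domain, key=lambda v: unary["x1"][v] + msg_to_x1[v])
best_x2 = max(domain, key=lambda v: unary["x2"][v] + msg_to_x2[v])
print(best_x1, best_x2)        # 1 1, the assignment with maximum total utility
```

Each agent only exchanges messages with the factors it shares, which is what makes the scheme attractive when communication is the scarce resource.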
Dynamic Continuous Distributed Constraint Optimization Problems
The Distributed Constraint Optimization Problem (DCOP) formulation is a powerful tool to model multi-agent coordination problems that are distributed by nature. The formulation is suitable for problems where the environment does not change over time and where agents choose their value assignments from discrete domains. However, in many real-world applications, agents often interact in a more dynamic environment and their variables usually require a more complex domain. Thus, the DCOP formulation lacks the capability to model problems in such dynamic and complex environments. To address these limitations, researchers have proposed Dynamic DCOPs (D-DCOPs) to model how DCOPs change over time and Continuous DCOPs (C-DCOPs) to model DCOPs with continuous variables. The two models address these limitations only in isolation, and thus it remains a challenge to model problems that have continuous variables and operate in a dynamic environment. This dissertation therefore investigates a novel formulation that addresses the two limitations together by modeling both the dynamic nature of the environment and the continuous nature of the variables. Firstly, we propose Proactive Dynamic DCOPs (PD-DCOPs), which model and solve DCOPs in dynamic environments in a proactive manner. Secondly, we propose several efficient C-DCOP algorithms and provide quality guarantees on their solutions. Finally, we propose Dynamic Continuous DCOPs (DC-DCOPs), a novel formulation that models DCOPs with continuous variables in a dynamic environment.
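As a rough illustration of why continuous domains change the picture, the sketch below lets two agents refine continuous variables through local gradient steps on a smooth joint cost, the kind of local refinement continuous-domain methods rely on instead of enumerating a finite domain. The cost function, step size, and iteration count are assumed values for illustration and are not one of the dissertation's algorithms.

```python
# Illustrative sketch (assumed cost and parameters, not the dissertation's code):
# two agents hold continuous variables x1, x2 and jointly minimize a smooth cost
# by each taking gradient steps on its own variable.

def cost(x1, x2):
    return (x1 - x2) ** 2 + (x1 - 3.0) ** 2 + (x2 - 7.0) ** 2

x1, x2, step = 0.0, 0.0, 0.1
for _ in range(200):
    g1 = 2.0 * (x1 - x2) + 2.0 * (x1 - 3.0)    # agent 1's local gradient
    g2 = -2.0 * (x1 - x2) + 2.0 * (x2 - 7.0)   # agent 2's local gradient
    x1, x2 = x1 - step * g1, x2 - step * g2    # each agent updates only its own variable
print(round(x1, 2), round(x2, 2), round(cost(x1, x2), 2))   # converges near 4.33, 5.67
```

A discrete DCOP algorithm would instead have to pick a finite discretization of each variable's range, which is exactly the gap the continuous formulations aim to close.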
1994 Science Information Management and Data Compression Workshop
This document is the proceedings from the 'Science Information Management and Data Compression Workshop,' which was held on September 26-27, 1994, at the NASA Goddard Space Flight Center, Greenbelt, Maryland. The Workshop explored promising computational approaches for handling the collection, ingestion, archiving, and retrieval of large quantities of data in future Earth and space science missions. It consisted of eleven presentations covering a range of information management and data compression approaches that are being or have been integrated into actual or prototypical Earth or space science data information systems, or that hold promise for such an application. The workshop was organized by James C. Tilton and Robert F. Cromp of the NASA Goddard Space Flight Center.
On Realization of Intelligent Decision-Making in the Real World: A Foundation Decision Model Perspective
Our situated environment is full of uncertainty and highly dynamic, which hinders the widespread adoption of machine-led Intelligent Decision-Making (IDM) in real-world scenarios. This means IDM should have the capability to continuously learn new skills and efficiently generalize across wider applications. IDM benefits from new approaches and theoretical breakthroughs that move toward Artificial General Intelligence (AGI) by breaking the barriers between tasks and applications. Recent research has thoroughly examined the Transformer neural architecture as a backbone foundation model and its generalization to various tasks, including computer vision, natural language processing, and reinforcement learning. We therefore argue that a foundation decision model (FDM) can be established by formulating various decision-making tasks as a sequence decoding task using the Transformer architecture; this would be a promising solution for advancing the applications of IDM in more complex real-world tasks. In this paper, we elaborate on how a foundation decision model improves the efficiency and generalization of IDM. We also discuss potential applications of an FDM in multi-agent game AI, production scheduling, and robotics tasks. Finally, through a case study, we demonstrate our realization of the FDM, DigitalBrain (DB1) with 1.2 billion parameters, which achieves human-level performance on 453 tasks, including text generation, image captioning, video game playing, robotic control, and traveling salesman problems. As a foundation decision model, DB1 would be a baby step towards more autonomous and efficient real-world IDM applications.
Comment: 26 pages, 4 figures
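To illustrate what "decision-making as sequence decoding" can look like, the sketch below flattens a trajectory into interleaved return-to-go, state, and action tokens and runs a small causal Transformer that predicts the next action at each state token. The module names, dimensions, and layer counts are assumptions chosen for brevity; this is a generic sketch of the technique, not DB1's actual architecture.

```python
# Minimal sketch of decision-making as sequence decoding (hypothetical sizes,
# not DB1): interleave (return-to-go, state, action) tokens and decode actions
# with a causally masked Transformer.
import torch
import torch.nn as nn

class TinyDecisionTransformer(nn.Module):
    def __init__(self, state_dim=4, act_vocab=6, d_model=64):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)           # return-to-go embedding
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Embedding(act_vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, act_vocab)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T) int64
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_action(actions)],
            dim=2,
        ).flatten(1, 2)                                   # (B, 3T, d_model)
        seq_len = tokens.size(1)
        causal = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
        hidden = self.backbone(tokens, mask=causal)       # causal self-attention
        # Predict the next action from the hidden state at each state token.
        return self.action_head(hidden[:, 1::3, :])       # (B, T, act_vocab)

model = TinyDecisionTransformer()
logits = model(torch.zeros(2, 5, 1), torch.zeros(2, 5, 4),
               torch.zeros(2, 5, dtype=torch.long))
print(logits.shape)  # torch.Size([2, 5, 6])
```

The appeal of this framing is that text generation, game playing, and control can all be expressed as the same next-token prediction problem over suitably tokenized trajectories.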
Operating system fault tolerance support for real-time embedded applications
Doctoral thesis in Industrial Electronics (field of knowledge: Industrial Informatics)
Fault tolerance is a means of achieving high dependability for critical and high-availability systems. Despite the efforts to prevent and remove faults during the development of these systems, the application of fault tolerance is usually required because the hardware may fail during system operation and software faults are very hard to eliminate completely.
One of the difficulties in implementing fault tolerance techniques is the lack of support from operating systems and middleware. In most fault-tolerant projects, the programmer has to develop a fault tolerance implementation for each application. This strong customization makes fault-tolerant software costly and difficult to implement and maintain. In particular, for small-scale embedded systems, the introduction of fault tolerance techniques may also have an impact on their restricted resources, such as processing power and memory size.
The purpose of this research is to provide fault tolerance support for real-time applications in small-scale embedded systems. The main approach of this thesis is to develop and integrate a customizable and extensible fault tolerance framework into a real-time operating system, in order to fulfill the needs of a large range of dependable applications. Special attention is paid to allowing the coexistence of fault tolerance with real-time constraints. The use of the proposed framework offers several advantages over ad-hoc implementations, such as simplifying application-level programming and improving system configurability and maintainability.
In addition, this thesis also investigates the application of aspect-oriented techniques to the development of real-time embedded fault-tolerant software. Aspect-Oriented Programming (AOP) is employed to modularize all fault tolerance source code, following the principle of separation of concerns, and to integrate the proposed framework into the operating system.
Two case studies are used to evaluate the proposed implementation in terms of performance and resource costs. The results show that the overheads related to the framework application are acceptable and the ones related to the AOP implementation are negligible.
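As a loose illustration of the separation of concerns that the framework and AOP pursue, the sketch below keeps the fault-handling logic (a bounded-retry recovery wrapper with a fallback) entirely outside the application task. It is written in Python purely for brevity and uses hypothetical names; it is not the thesis's framework, which targets a real-time operating system.

```python
# Illustrative only (hypothetical API, not the thesis's framework): a recovery
# wrapper that retries a failing task and then degrades to a fallback, so the
# application function itself contains no fault-handling code.
import functools

def with_recovery(retries=2, fallback=None):
    """Decorator that retries a task on failure and then runs a fallback."""
    def decorator(task):
        @functools.wraps(task)
        def wrapper(*args, **kwargs):
            last_fault = None
            for _ in range(retries + 1):
                try:
                    return task(*args, **kwargs)
                except Exception as fault:        # fault detected
                    last_fault = fault
            if fallback is not None:              # degraded-mode recovery
                return fallback(*args, **kwargs)
            raise last_fault
        return wrapper
    return decorator

@with_recovery(retries=1, fallback=lambda reading: 0.0)
def filter_sensor(reading):
    # Application logic only: no fault-handling code mixed in here.
    if reading is None:
        raise ValueError("sensor dropout")
    return reading * 0.5

print(filter_sensor(4.0))   # 2.0
print(filter_sensor(None))  # falls back to 0.0 after the retries are exhausted
```

In an AOP setting the wrapper would be woven in as an aspect at compile time rather than attached as a decorator, but the division of responsibilities is the same.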
Resource-aware IoT Control: Saving Communication through Predictive Triggering
The Internet of Things (IoT) interconnects multiple physical devices in large-scale networks. When the 'things' coordinate decisions and act collectively on shared information, feedback is introduced between them. Multiple feedback loops are thus closed over a shared, general-purpose network. Traditional feedback control is unsuitable for the design of IoT control because it relies on high-rate periodic communication and is ignorant of the shared network resource. Therefore, recent event-based estimation methods are applied herein for resource-aware IoT control, allowing agents to decide online whether communication with other agents is needed or not. While this can reduce network traffic significantly, a severe limitation of typical event-based approaches is the need for instantaneous triggering decisions that leave no time to reallocate freed resources (e.g., communication slots), which hence remain unused. To address this problem, novel predictive and self-triggering protocols are proposed herein. From a unified Bayesian decision framework, two schemes are developed: self-triggers that predict, at the current triggering instant, the next one; and predictive triggers that check at every time step whether communication will be needed at a given prediction horizon. The suitability of these triggers for feedback control is demonstrated in hardware experiments on a cart-pole, and scalability is discussed with a multi-vehicle simulation.
Comment: 16 pages, 15 figures, accepted article to appear in IEEE Internet of Things Journal. arXiv admin note: text overlap with arXiv:1609.0753
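To sketch the predictive-trigger idea in code, the toy loop below propagates a scalar estimator's error variance forward and reserves a communication slot H steps ahead whenever the predicted variance would exceed a threshold. The noise level, threshold, and horizon are assumed values, and this is a simplified stand-in, not the paper's exact protocol or its Bayesian decision framework.

```python
# Toy predictive trigger (assumed parameters, not the paper's protocol): without
# communication the remote estimator's error variance grows by Q each step, so an
# agent checks H steps ahead and reserves a slot before the threshold is crossed.

Q = 0.2          # process-noise variance added per silent step (assumed scalar system)
THRESHOLD = 1.0  # tolerated prediction-error variance at the remote estimator
H = 3            # prediction horizon: slots must be reserved H steps in advance

def predicted_var(var_now, t_now, booked):
    """Propagate the error variance H steps ahead, honoring slots already reserved."""
    var = var_now
    for step in range(t_now + 1, t_now + H + 1):
        var = 0.0 if step in booked else var + Q
    return var

var, booked = 0.0, set()       # booked: future time steps with a reserved slot
for t in range(12):
    if t in booked:
        var = 0.0              # slot used: remote estimate refreshed
    else:
        var += Q               # silence: error variance keeps growing
    # Predictive trigger: decide now whether a slot will be needed at t + H.
    if (t + H) not in booked and predicted_var(var, t, booked) > THRESHOLD:
        booked.add(t + H)
print(sorted(booked))          # slots reserved in advance, e.g. [5, 11]
```

Because the decision is made H steps early, the network scheduler has time to hand unused slots to other agents, which is the resource gain over purely instantaneous event triggers.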
- …