Optimization Based Control for Multi-agent System with Interaction
Recently, artificial intelligence has achieved significant success, with applications in various domains including transportation, smart buildings, robotics, and economics. More and more traditional system entities have been endowed with full or partial autonomy, allowing them to make their own decisions and moves based on their specific surrounding environments. An integration of multiple such intelligent entities is called a multi-agent system (MAS), in which the agents need to interact with each other effectively and efficiently to attain cooperation and optimal system performance. To fulfill this more challenging intelligent interaction objective, traditional control approaches do not suffice, and more advanced algorithms become essential.

In this dissertation, three system structures for interactive control systems, centralized, distributed, and decentralized, are discussed, with applications in intelligent buildings and autonomous driving. Several concrete interactive control algorithms are proposed and verified.

In the centralized control system, a single central agent with the whole system's information available is in charge of making decisions for all the agents. The system-wise cooperative solution is thus obtained directly, and all the interactions involved are addressed optimally. Chapters 3 and 4 adopt this centralized control strategy for the intelligent building system. In order to reduce energy consumption and satisfy the occupants' thermal comfort demands, a combination of feedforward iterative learning control (ILC) and an iteratively tuned feedback controller is designed to compensate for both repetitive and non-repetitive disturbance components. Chapter 3 proposes an iterative controller design algorithm based on optimization and a stabilizing feedback projection.
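The ILC idea above can be sketched in a few lines. This is a minimal illustrative example, not the dissertation's algorithm: a basic P-type ILC update on a hypothetical first-order plant, showing how the feedforward input learned over repeated trials cancels a repetitive disturbance. All names and numerical values (the plant coefficient, `learning_gain`, the sinusoidal disturbance) are assumptions for illustration only.

```python
import numpy as np

def simulate_trial(u, d):
    """Toy first-order plant y[t+1] = 0.2*y[t] + u[t] + d[t] (hypothetical)."""
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = 0.2 * y[t] + u[t] + d[t]
    return y[1:]

ref = np.ones(50)                                 # desired output, e.g. a setpoint
d = 0.1 * np.sin(np.linspace(0, 2 * np.pi, 50))   # repetitive disturbance
u = np.zeros(50)                                  # feedforward input, refined per trial
learning_gain = 0.5

errors = []
for k in range(30):                               # iterate over repeated trials
    y = simulate_trial(u, d)
    e = ref - y
    errors.append(np.linalg.norm(e))
    u = u + learning_gain * e                     # P-type ILC update from last trial's error
```

Because the disturbance repeats every trial, the learned input absorbs it and the tracking error shrinks across iterations; the non-repetitive component is what the feedback controller in the dissertation handles.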
In Chapter 4, the concurrent design of the feedforward ILC and a causal stabilizing feedback controller is introduced, where both controllers are solved simultaneously by a single optimization.

However, the centralized approach's complexity grows with the problem size, which makes it impractical for large-scale systems. The distributed control strategy is introduced as an alternative for such high-dimensional control problems. In the distributed system, a communication network enables information exchange among agents. Each agent can therefore keep broadcasting and updating its local controller until convergence to the cooperative solution is reached. In Chapter 5, a distributed cooperative controller design method is developed for intelligent building thermal control, with its convergence property proven theoretically.

For a system with no global communication, whose agents follow different control policies, the decentralized control structure is the only valid solution: each agent designs its local controller independently based on estimated information about the others. In Part II of the dissertation, several decentralized interactive control algorithms are proposed for the autonomous driving system. In Chapter 6, an optimization-based negotiation with both concession and persuasion is formulated for a vehicle agent's decision making in various interactive scenarios. A Bayesian persuasion based algorithm for interactive driving is explored in Chapter 7; in this algorithm, the ego vehicle agent (persuader) intends to shape the interacting vehicle agent's (information receiver's) belief about the current driving situation via observable driving behavior. In Chapter 8, the interaction between two vehicle agents is formulated as a two-player persuasion game whose mixed Nash equilibrium gives the agents' optimal intention probabilities.
The optimal intention is then expressed through the ego vehicle's driving trajectory, which is planned by an optimization subject to an intention expression constraint.
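The mixed Nash equilibrium mentioned for the two-vehicle interaction can be illustrated with a generic 2x2 game. This sketch is not the dissertation's game: the payoff matrices are hypothetical values chosen so that mutual "go" (conflict) and mutual "yield" (deadlock) are both penalized, and the interior equilibrium is found from the standard indifference conditions.

```python
import numpy as np

# Hypothetical payoffs: rows/columns are ("yield", "go") for each vehicle.
A = np.array([[-1.0,  0.0],    # ego vehicle's payoffs
              [ 2.0, -5.0]])
B = np.array([[-1.0,  2.0],    # other vehicle's payoffs
              [ 0.0, -5.0]])

# For a 2x2 game with an interior mixed equilibrium, each player mixes so
# the *opponent* is indifferent between its two pure actions.
q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])  # P(other plays "yield")
p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])  # P(ego plays "yield")
```

The equilibrium probabilities (p, q) play the role the abstract assigns to the "optimal intention probabilities": they are what the planned trajectory would then have to express.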
Almost-Bayesian Quadratic Persuasion (Extended Version)
In this article, we relax the Bayesianity assumption in the now-traditional
model of Bayesian Persuasion introduced by Kamenica & Gentzkow. Unlike
preexisting approaches -- which have tackled the possibility of the receiver
(Bob) being non-Bayesian by considering that his thought process is not
Bayesian yet known to the sender (Alice), possibly up to a parameter -- we let
Alice merely assume that Bob behaves 'almost like' a Bayesian agent, in some
sense, without resorting to any specific model.
Under this assumption, we study Alice's strategy when both utilities are
quadratic and the prior is isotropic. We show that, contrary to the Bayesian
case, Alice's optimal response may not be linear anymore. This fact is
unfortunate as linear policies remain the only ones for which the induced
belief distribution is known. What is more, evaluating linear policies proves
difficult except in particular cases, let alone finding an optimal one.
Nonetheless, we derive bounds that prove linear policies are near-optimal and
allow Alice to compute a near-optimal linear policy numerically. With this
solution in hand, we show that Alice shares less information with Bob as he
departs more from Bayesianity, much to his detriment.

Comment: This version extends the article submitted to the IEEE Transactions
on Automatic Control.
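The Bayesian benchmark that the article relaxes can be illustrated with the classic binary example from Kamenica & Gentzkow (a sketch of the standard model, not the article's quadratic, non-Bayesian setting; the prior and threshold values are illustrative). The sender commits to a signalling policy that pools states just enough to move a fully Bayesian receiver's posterior to the action threshold.

```python
prior = 0.3          # P(state = "guilty"); the receiver acts iff posterior >= threshold
threshold = 0.5

# Signal "act" always in the favorable state; in the unfavorable state,
# with probability q chosen so the posterior after "act" sits exactly
# at the receiver's threshold.
q = prior * (1 - threshold) / ((1 - prior) * threshold)

p_act_signal = prior + (1 - prior) * q        # total probability the receiver acts
posterior = prior / p_act_signal              # Bayesian posterior after "act"
```

Here the sender induces the receiver to act with probability 0.6, double the prior of 0.3, precisely because the receiver updates by Bayes' rule; the article's question is what the sender can still guarantee when that update is only approximately Bayesian.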