Input Efficiency for Influencing Swarm
Many cooperative control problems, ranging from formation following to rendezvous to flocking, can be expressed as consensus problems. The ability of an operator to influence the development of consensus within a swarm therefore provides a basic test of the quality of human-swarm interaction (HSI). Two plausible approaches are: Direct, in which a desired value is dictated to swarm members, and Indirect, in which the operator controls or influences one or more swarm members and relies on existing control laws to propagate that influence. Both approaches have been followed by HSI researchers. The Indirect case uses standard consensus methods: the operator exerts influence over a few robots, and the swarm then reaches a consensus based on its intrinsic rules. The Direct method corresponds to flooding, in which the operator sends the intention directly to a subset of the swarm and the command then propagates through the remainder of the swarm as a privileged message. In this paper we compare the convergence time and properties of these two methods under noisy and noiseless conditions with static and dynamic graphs. We find that the average consensus (indirect) method converges much more slowly than the flooding (direct) method but tolerates noise better than simple flooding algorithms. We also find that the convergence time of the consensus method behaves erratically when the graph's connectivity (Fiedler value) is high.
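The contrast between the two influence methods can be sketched in a few lines. The ring topology, step size, pinning scheme, and tolerance below are illustrative assumptions, not the paper's experimental setup; the point is only that flooding finishes in a number of steps bounded by the graph diameter, while averaging takes many more iterations to settle.

```python
# Minimal sketch (assumed setup): 10 robots on a ring graph, operator
# influences robot 0 with a desired value of 1.0.
N = 10
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # ring topology
target = 1.0   # operator's desired consensus value
eps = 0.2      # consensus step size (must be below 1 / max degree)

def average_consensus(steps=500, tol=1e-3):
    """Indirect control: operator pins one robot; the rest average locally."""
    x = [0.0] * N
    for t in range(steps):
        x[0] = target  # operator continuously influences robot 0
        # each robot moves toward the average of its neighbors (old values)
        x = [xi + eps * sum(x[j] - xi for j in neighbors[i])
             for i, xi in enumerate(x)]
        x[0] = target
        if max(abs(xi - target) for xi in x) < tol:
            return t + 1
    return steps

def flooding():
    """Direct control: the command propagates hop by hop as a message."""
    informed = {0}
    t = 0
    while len(informed) < N:
        informed |= {j for i in informed for j in neighbors[i]}
        t += 1
    return t
```

On this 10-node ring, `flooding()` needs only 5 steps (the distance to the farthest robot), while the pinned averaging dynamics need far more iterations to bring every robot within tolerance of the target, matching the convergence-time gap the abstract reports.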
Dynamic Forecasting Behavior by Analysts: Theory and Evidence
We examine the dynamic forecasting behavior of security analysts in response to their prior performance relative to their peers within a continuous-time, multi-period framework. Our model predicts a U-shaped relationship between the boldness of an analyst's forecast, that is, the deviation of her forecast from the consensus, and her prior relative performance. In other words, analysts who significantly outperform or underperform their peers issue bolder forecasts than intermediate performers. We then test these predictions of our model on observed analyst forecast data. Consistent with our theoretical predictions, we document an approximately U-shaped relationship between analysts' prior relative performance and the deviation of their forecasts from the consensus. Our theory examines the impact of both explicit incentives, in the form of compensation structures, and implicit incentives, in the form of career concerns, on the dynamic forecasting behavior of analysts. Consistent with existing empirical evidence, our results imply that analysts who face greater employment risk (that is, the risk of being fired for poor performance) have greater incentives to herd, that is, to issue forecasts that deviate less from the consensus. Our multi-period model allows us to examine the dynamic forecasting behavior of analysts, in contrast with the extant two-period models, which are static in nature. Moreover, the model also differs significantly from existing theoretical models in that it does not rely on any specific assumptions regarding the existence of asymmetric information and/or differential analyst abilities.
Keywords: security analysts, herding, career concerns
Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments
An autonomous and resilient controller is proposed for leader-follower multi-agent systems under uncertainties and cyber-physical attacks. The leader is assumed non-autonomous, with a nonzero control input, which allows the team behavior or mission to change in response to environmental changes. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H_infinity controller is first designed to prevent the effects of attacks on sensors and actuators from propagating throughout the network, as well as to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H_infinity optimal synchronization problem, and off-policy reinforcement learning is utilized to learn their solution without requiring any knowledge of the agents' dynamics. A trust-confidence based distributed control protocol is then proposed to mitigate attacks that hijack an entire node and attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. The proposed resilient reinforcement learning algorithm employs the confidence value of each agent to indicate the trustworthiness of its own information and broadcasts it to its neighbors, which use it to weight the data they receive from that agent during and after learning. If the confidence value of an agent is low, the agent employs a trust mechanism to identify compromised agents and removes the data it receives from them from the learning process. Simulation results are provided to show the effectiveness of the proposed approach.
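The confidence-weighting idea can be illustrated with a toy consensus update. This is a much-simplified stand-in for the paper's protocol: the complete graph, the fixed confidence values, the trust threshold, and the weighting rule are all assumptions made for illustration, and the learning layer is omitted entirely.

```python
# Toy illustration (assumed setup): 5 agents on a complete graph. Agent 3 is
# "compromised" and broadcasts a low confidence value; its neighbors drop its
# data once that value falls below a trust threshold.
N = 5
neighbors = {i: [j for j in range(N) if j != i] for i in range(N)}

state = [float(i) for i in range(N)]   # local states to be synchronized
confidence = [1.0] * N
confidence[3] = 0.1                    # agent 3 flags itself as untrustworthy
TRUST_THRESHOLD = 0.5

def step(x):
    new = []
    for i in range(N):
        # keep only trusted neighbors; weight incoming data by sender confidence
        trusted = [j for j in neighbors[i] if confidence[j] >= TRUST_THRESHOLD]
        w = confidence[i] + sum(confidence[j] for j in trusted)
        new.append((confidence[i] * x[i]
                    + sum(confidence[j] * x[j] for j in trusted)) / w)
    return new

for _ in range(50):
    state = step(state)
```

The healthy agents (0, 1, 2, 4) agree on the average of their own initial states, 1.75, rather than the corrupted all-agent average of 2.0, which is the qualitative effect the trust mechanism is meant to achieve.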
Online Resource Inference in Network Utility Maximization Problems
The amount of transmitted data in computer networks is expected to grow considerably in the future, putting more and more pressure on network infrastructures. In order to guarantee good service, it then becomes fundamental to use the network resources efficiently. Network Utility Maximization (NUM) provides a framework to optimize the rate allocation when network resources are limited. Unfortunately, in the scenario where the amount of available resources is not known a priori, classical NUM solving methods do not offer a viable solution. To overcome this limitation, we design an overlay rate allocation scheme that attempts to infer the actual amount of available network resources while coordinating the users' rate allocation. Due to the general and complex model assumed for the congestion measurements, passive learning of the available resources would not lead to satisfactory performance. The coordination scheme must therefore perform active learning in order to speed up the resource estimation and quickly increase the system performance. By adopting an optimal learning formulation, we are able to balance the tradeoff between accurate estimation and effective exploitation of the resources, in order to maximize the long-term quality of the service delivered to the users.
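The classical NUM machinery the abstract builds on can be sketched for a single bottleneck link. In this simplified version the capacity is known and a dual subgradient ("congestion price") update drives log-utility users to the optimal allocation; the paper's contribution is precisely the harder setting where the capacity must be inferred online. The utilities, capacity, and step size here are illustrative assumptions.

```python
# Minimal NUM sketch (assumed setup): U users with utility log(x_i) share one
# link of known capacity; a price-based dual update coordinates their rates.
U = 3            # number of users
capacity = 6.0   # link capacity (known here; unknown in the paper's setting)
price = 1.0      # dual variable: the congestion price of the link
alpha = 0.02     # subgradient step size

for _ in range(5000):
    # each user solves max_x log(x) - price * x, giving x = 1 / price
    rates = [1.0 / price] * U
    # raise the price when the link is overloaded, lower it when underused
    price = max(1e-6, price + alpha * (sum(rates) - capacity))
```

Log utilities yield the proportionally fair allocation, so each user's rate converges to `capacity / U = 2.0` and the price to `U / capacity = 0.5`; replacing the known `capacity` with a running estimate is where the estimation/exploitation tradeoff discussed above enters.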
Fixed-Parameter Algorithms for Computing Kemeny Scores - Theory and Practice
The central problem in this work is to compute a ranking of a set of elements which is "closest to" a given set of input rankings of the elements. We define "closest to" in an established way as having the minimum sum of Kendall-Tau distances to each input ranking. Unfortunately, the resulting problem, Kemeny consensus, is NP-hard for instances with n input rankings, n being an even integer greater than three. Nevertheless, this problem plays a central role in many rank aggregation problems. It was shown that one can compute the corresponding Kemeny consensus list in f(k) + poly(n) time, where f(k) is a computable function of one of the parameters "score of the consensus", "maximum distance between two input rankings", "number of candidates", and "average pairwise Kendall-Tau distance", and poly(n) is a polynomial in the input size. This work demonstrates the practical usefulness of the corresponding algorithms by applying them to randomly generated data and several real-world data sets, showing that these fixed-parameter algorithms are not only of theoretical interest. In a more theoretical part of this work, we develop an improved fixed-parameter algorithm for the parameter "score of the consensus" with a better upper bound on the running time than previous algorithms.
Comment: Studienarbeit
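The two definitions at the heart of the abstract, the Kendall-Tau distance and the Kemeny consensus, can be stated directly in code. The brute-force search below is exponential in the number of candidates, which is exactly why the fixed-parameter algorithms discussed above matter; the example votes are an illustrative assumption.

```python
from itertools import combinations, permutations

def kendall_tau(r1, r2):
    """Number of candidate pairs that the two rankings order differently."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

def kemeny_consensus(rankings):
    """Exhaustive search over all m! orderings of the m candidates."""
    candidates = rankings[0]
    return min(permutations(candidates),
               key=lambda r: sum(kendall_tau(list(r), v) for v in rankings))

votes = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
best = kemeny_consensus(votes)
score = sum(kendall_tau(list(best), v) for v in votes)
```

Here the consensus is the ranking a, b, c with a Kemeny score of 2 (one disagreement with each of the second and third votes); the parameter "score of the consensus" in the abstract is exactly this minimum value.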