Deception in Optimal Control
In this paper, we consider an adversarial scenario where one agent seeks to
achieve an objective and its adversary seeks to learn the agent's intentions
and prevent the agent from achieving its objective. The agent has an incentive
to try to deceive the adversary about its intentions, while at the same time
working to achieve its objective. The primary contribution of this paper is to
introduce a mathematically rigorous framework for the notion of deception
within the context of optimal control. The central notion introduced in the
paper is that of a belief-induced reward: a reward dependent not only on the
agent's state and action, but also on the adversary's beliefs. The design of an optimal
deceptive strategy then becomes a question of optimal control design on the
product of the agent's state space and the adversary's belief space. The
proposed framework allows for deception to be defined in an arbitrary control
system endowed with a reward function, as well as with additional
specifications limiting the agent's control policy. In addition to defining
deception, we discuss the design of optimally deceptive strategies under
uncertainties in the agent's knowledge about the adversary's learning process. In
the latter part of the paper, we focus on a setting where the agent's behavior
is governed by a Markov decision process, and show that the design of optimally
deceptive strategies under lack of knowledge about the adversary naturally
reduces to previously discussed problems in control design on partially
observable or uncertain Markov decision processes. Finally, we present two
examples of deceptive strategies: a "cops and robbers" scenario and an example
where an agent may use camouflage while moving. We show that optimally
deceptive strategies in such examples follow the intuitive idea of how to
deceive an adversary in the above settings.
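The planning problem on the product of the agent's state space and the adversary's belief space can be exercised with a toy sketch. Everything below is an illustrative assumption, not the paper's construction: a line world with a decoy goal, an adversary that Bayes-updates a scalar belief from observed moves, and a belief-induced reward that penalizes the adversary's confidence in the true goal.

```python
import numpy as np

# Illustrative toy model (not from the paper): states 0..4 on a line, true
# goal at state 4, decoy hypothesis at state 0. The adversary's belief
# b = P(goal = 4) is updated by Bayes' rule, assuming the agent moves toward
# its goal with probability 0.8.
N_S, GOAL, N_B = 5, 4, 21          # states, true goal, belief grid size
GAMMA, LAM = 0.95, 0.5             # discount factor, deception weight
bgrid = np.linspace(0.0, 1.0, N_B)

def step(s, a):                    # a = +1 (right) or -1 (left)
    return min(max(s + a, 0), N_S - 1)

def belief_update(b, a):
    # likelihood of the observed action under each goal hypothesis
    l_true  = 0.8 if a == +1 else 0.2   # goal 4 lies to the right
    l_false = 0.8 if a == -1 else 0.2   # decoy goal 0 lies to the left
    z = l_true * b + l_false * (1 - b)
    return l_true * b / z

def bidx(b):                       # nearest belief grid point
    return int(round(b * (N_B - 1)))

# belief-induced reward: goal attainment minus a penalty for letting the
# adversary grow confident about the agent's true intention
def reward(s, b):
    return (1.0 if s == GOAL else 0.0) - LAM * b

V = np.zeros((N_S, N_B))
for _ in range(200):               # value iteration on the product space
    Vn = np.zeros_like(V)
    for s in range(N_S):
        for i, b in enumerate(bgrid):
            if s == GOAL:          # absorbing goal state
                Vn[s, i] = reward(s, b)
                continue
            q = []
            for a in (-1, +1):
                s2, b2 = step(s, a), belief_update(b, a)
                q.append(reward(s, b) + GAMMA * V[s2, bidx(b2)])
            Vn[s, i] = max(q)
    V = Vn
```

Increasing LAM makes a low adversary belief more valuable relative to reaching the goal quickly; with LAM = 0 the problem reduces to plain goal-directed control on the physical state space alone.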
The Philosophical Foundations of PLEN: A Protocol-theoretic Logic of Epistemic Norms
In this dissertation, I defend the protocol-theoretic account of epistemic norms. The protocol-theoretic account amounts to three theses: (i) There are norms of epistemic rationality that are procedural; epistemic rationality is at least partially defined by rules that restrict the possible ways in which epistemic actions and processes can be sequenced, combined, or chosen among under varying conditions. (ii) Epistemic rationality is ineliminably defined by procedural norms; procedural restrictions provide an irreducible unifying structure for even apparently non-procedural prescriptions and normative expressions, and they are practically indispensable in our cognitive lives. (iii) These procedural epistemic norms are best analyzed in terms of the protocol (or program) constructions of dynamic logic.
I defend (i) and (ii) at length and in multi-faceted ways, and I argue that they entail a set of criteria of adequacy for models of epistemic dynamics and abstract accounts of epistemic norms. I then define PLEN, the protocol-theoretic logic of epistemic norms. PLEN is a dynamic logic that analyzes epistemic rationality norms with protocol constructions interpreted over multi-graph based models of epistemic dynamics. The kernel of the overall argument of the dissertation is showing that PLEN uniquely satisfies the criteria defended; none of the familiar, rival frameworks for modeling epistemic dynamics or normative concepts are capable of satisfying these criteria to the same degree as PLEN. The overarching argument of the dissertation is thus a theory-preference argument for PLEN.
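For context on thesis (iii): the protocol (program) constructions in question are those familiar from propositional dynamic logic. The following is textbook PDL syntax and relational semantics, given here only as background, not PLEN's own formalism:

```latex
\[
  \pi \;::=\; a \;\mid\; \pi_1 ; \pi_2 \;\mid\; \pi_1 \cup \pi_2 \;\mid\; \pi^{*} \;\mid\; \varphi?
\]
\[
  \llbracket \pi_1 ; \pi_2 \rrbracket = \llbracket \pi_1 \rrbracket \circ \llbracket \pi_2 \rrbracket,
  \qquad
  \llbracket \pi_1 \cup \pi_2 \rrbracket = \llbracket \pi_1 \rrbracket \cup \llbracket \pi_2 \rrbracket,
  \qquad
  \llbracket \pi^{*} \rrbracket = \llbracket \pi \rrbracket^{*}
\]
```

Here $[\pi]\varphi$ holds at a state $w$ iff $\varphi$ holds at every state reachable from $w$ by executing $\pi$; sequencing, choice, and iteration are exactly the restrictions on how epistemic actions may be combined that thesis (i) describes.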
Betting on the Outcomes of Measurements: A Bayesian Theory of Quantum Probability
We develop a systematic approach to quantum probability as a theory of
rational betting in quantum gambles. In these games of chance the agent is
betting in advance on the outcomes of several (finitely many) incompatible
measurements. One of the measurements is subsequently chosen and performed and
the money placed on the other measurements is returned to the agent. We show
how the rules of rational betting imply all the interesting features of quantum
probability, even in such finite gambles. These include the uncertainty
principle and the violation of Bell's inequality among others. Quantum gambles
are closely related to quantum logic and provide a new semantics to it. We
conclude with a philosophical discussion on the interpretation of quantum
mechanics.
Comment: 21 pages, 2 figures
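The Bell-inequality violation mentioned in the abstract can be checked directly. The sketch below is the standard textbook CHSH setup for a two-qubit singlet state, not code from the paper:

```python
import numpy as np

def obs(theta):
    # spin observable at angle theta in the x-z plane:
    # cos(theta)*sigma_z + sin(theta)*sigma_x
    return np.cos(theta) * np.array([[1.0, 0.0], [0.0, -1.0]]) \
         + np.sin(theta) * np.array([[0.0, 1.0], [1.0, 0.0]])

# singlet state |psi> = (|01> - |10>) / sqrt(2)
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def corr(a, b):
    # E(a, b) = <psi| A(a) (x) B(b) |psi>; for the singlet this is -cos(a - b)
    return psi @ np.kron(obs(a), obs(b)) @ psi

# CHSH combination at the standard optimal measurement angles
a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = corr(a0, b0) - corr(a0, b1) + corr(a1, b0) + corr(a1, b1)
print(abs(S))  # 2*sqrt(2) ~ 2.828, exceeding the classical bound of 2
```

Any assignment of probabilities obeying the rational-betting constraints must reproduce these correlators, which is why the violation survives even in finite gambles.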
Quantum mechanics as a theory of probability
We develop and defend the thesis that the Hilbert space formalism of quantum
mechanics is a new theory of probability. The theory, like its classical
counterpart, consists of an algebra of events, and the probability measures
defined on it. The construction proceeds in the following steps: (a) Axioms for
the algebra of events are introduced following Birkhoff and von Neumann. All
axioms, except the one that expresses the uncertainty principle, are shared
with the classical event space. The only models for the set of axioms are
lattices of subspaces of inner product spaces over a field K. (b) Another axiom
due to Soler forces K to be the field of real, or complex numbers, or the
quaternions. We suggest a probabilistic reading of Soler's axiom. (c) Gleason's
theorem fully characterizes the probability measures on the algebra of events,
so that Born's rule is derived. (d) Gleason's theorem is equivalent to the
existence of a certain finite set of rays, with a particular orthogonality
graph (Wondergraph). Consequently, all aspects of quantum probability can be
derived from rational probability assignments to finite "quantum gambles". We
apply the approach to the analysis of entanglement, Bell inequalities, and the
quantum theory of macroscopic objects. We also discuss the relation of the
present approach to quantum logic, realism and truth, and the measurement
problem.
Comment: 37 pages, 3 figures. Forthcoming in a Festschrift for Jeffrey Bub, ed. W. Demopoulos and the author, Springer (Kluwer): University of Western Ontario Series in Philosophy of Science.
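As a numerical illustration of step (c), the Born rule p(E) = Tr(ρE) assigns a genuine probability distribution to any orthogonal resolution of the identity. This is a generic check of the rule Gleason's theorem characterizes, not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(0)

# a random density matrix on C^3: rho = A A† / Tr(A A†)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# a random orthonormal basis, i.e. three rank-1 projectors forming
# one maximal "measurement" (QR of a complex matrix yields unitary Q)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

# Born-rule probabilities p_k = <e_k| rho |e_k> = Tr(rho P_k)
probs = np.array([(Q[:, k].conj() @ rho @ Q[:, k]).real for k in range(3)])
print(probs, probs.sum())  # non-negative entries summing to 1
```

Gleason's theorem says this is the only way (for dimension at least 3) to assign probabilities consistently across all such bases, which is how the Born rule is derived rather than postulated.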
Novel Multidimensional Models of Opinion Dynamics in Social Networks
Unlike many complex networks studied in the literature, social networks
rarely exhibit unanimous behavior, or consensus. This requires the development of
mathematical models that are sufficiently simple to be examined and capture, at
the same time, the complex behavior of real social groups, where opinions and
actions related to them may form clusters of different size. One such model,
proposed by Friedkin and Johnsen, extends the idea of the conventional
consensus algorithm (also referred to as iterative opinion pooling) to take
into account the actors' prejudices, which are caused by exogenous factors and lead to
disagreement in the final opinions.
In this paper, we offer a novel multidimensional extension, describing the
evolution of the agents' opinions on several topics. Unlike the existing
models, these topics are interdependent, and hence the opinions being formed on
these topics are also mutually dependent. We rigorously examine stability
properties of the proposed model, in particular, convergence of the agents'
opinions. Although our model assumes synchronous communication among the
agents, we show that the same final opinions may be reached "on average" via
asynchronous gossip-based protocols.
Comment: Accepted by IEEE Transactions on Automatic Control (to be published in May 2017).
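The multidimensional extension described above can be sketched as a synchronous iteration in which each agent averages neighbors' opinions through a topic-coupling matrix and mixes in its own prejudice. All parameter values below, including the coupling matrix C, are illustrative assumptions, not taken from the paper:

```python
import numpy as np

n, m = 4, 2                                  # agents, topics
rng = np.random.default_rng(1)

W = rng.random((n, n))
W /= W.sum(axis=1, keepdims=True)            # row-stochastic influence weights
lam = np.array([0.9, 0.8, 0.7, 0.6])         # susceptibilities (1 - prejudice weight)
C = np.array([[0.8, 0.2],                    # topic interdependence (row-stochastic)
              [0.3, 0.7]])
u = rng.random((n, m))                       # prejudices = initial opinions
x = u.copy()

# synchronous Friedkin-Johnsen-type iteration with topic coupling:
# x_i <- lam_i * sum_j w_ij * C x_j + (1 - lam_i) * u_i
for _ in range(500):
    x = lam[:, None] * (W @ x @ C.T) + (1 - lam[:, None]) * u
```

The iteration converges because every susceptibility is strictly below 1 while W and C are row-stochastic, making the linear part a contraction; the limit generally retains disagreement across agents, as the abstract describes.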