Sequential decomposition of propositional logic programs
The sequential composition of propositional logic programs has recently been
introduced. This paper studies the sequential decomposition of programs
via Green's relations -- well known in semigroup theory -- between programs.
In a broader sense, this paper is a further step towards an algebraic theory
of logic programming.
Comment: arXiv admin note: text overlap with arXiv:2109.05300, arXiv:2009.0577
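Green's relations classify semigroup elements by the principal ideals they generate; since the paper applies them to programs under sequential composition, a minimal self-contained sketch of the relations themselves, on the full transformation monoid T_2 rather than on logic programs, may help (the representation below is mine, not the paper's):

```python
from itertools import product

# Full transformation monoid T_2: all functions {0,1} -> {0,1}, written as
# tuples (f(0), f(1)); composition is (f . g)(x) = f(g(x)).
M = [(0, 0), (0, 1), (1, 0), (1, 1)]

def comp(f, g):
    return (f[g[0]], f[g[1]])

def left_ideal(a):
    return frozenset(comp(m, a) for m in M)    # M.a

def right_ideal(a):
    return frozenset(comp(a, m) for m in M)    # a.M

# Green's relations: a L b iff M.a = M.b, and a R b iff a.M = b.M.
L = {(a, b) for a, b in product(M, M) if left_ideal(a) == left_ideal(b)}
R = {(a, b) for a, b in product(M, M) if right_ideal(a) == right_ideal(b)}

# The two constant maps are L-related but not R-related:
print(((0, 0), (1, 1)) in L, ((0, 0), (1, 1)) in R)  # True False
```

The same ideal comparisons, applied to programs under sequential composition instead of transformations, are what the paper's decomposition analysis rests on.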
Quarks and Leptons between Branes and Bulk
We study a supersymmetric SO(10) gauge theory in six dimensions compactified
on an orbifold. Three sequential quark-lepton families are localized at the
three fixpoints where SO(10) is broken to its three GUT subgroups. Split bulk
multiplets yield the Higgs doublets of the standard model and, as additional
states, lepton doublets and down-quark singlets. The physical quarks and leptons
are mixtures of brane and bulk states. The model naturally explains small quark
mixings together with large lepton mixings in the charged current. A small
hierarchy of neutrino masses is obtained due to the different down-quark and
up-quark mass hierarchies. None of the usual GUT relations between fermion
masses holds exactly.
Comment: 12 pages, 1 figure
Sequential random access codes and self-testing of quantum measurement instruments
Quantum Random Access Codes (QRACs) are key tools for a variety of protocols
in quantum information theory. These are commonly studied in
prepare-and-measure scenarios in which a sender prepares states and a receiver
measures them. Here, we consider a three-party prepare-transform-measure
scenario in which the simplest QRAC is implemented twice in sequence based on
the same physical system. We derive optimal trade-off relations between the two
QRACs. We apply our results to construct semi-device independent self-tests of
quantum instruments, i.e., measurement channels with both a classical and a
quantum output. Finally, we show how sequential QRACs enable inference of upper
and lower bounds on the sharpness parameter of a quantum instrument.
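For context, the simplest QRAC mentioned above encodes two classical bits into a single qubit so that either bit can be recovered with probability cos²(π/8) ≈ 0.8536. A minimal numpy sketch of this standard 2→1 QRAC follows; the particular encoding and function names are mine, and this is the single-round code, not the paper's sequential construction:

```python
import numpy as np

# Standard 2->1 quantum random access code: encode bits (b0, b1) into one
# qubit in the X-Z plane; measure Z to guess b0, or X to guess b1.

def prepare(b0, b1):
    # Bloch angle from +Z, halfway between the Z and X axes for each input.
    theta = {(0, 0): np.pi / 4,     (0, 1): -np.pi / 4,
             (1, 0): 3 * np.pi / 4, (1, 1): 5 * np.pi / 4}[(b0, b1)]
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def outcome_probs(psi, which):
    """Return (p_guess_0, p_guess_1) when measuring Z (which=0) or X (which=1)."""
    if which == 0:
        p0 = psi[0] ** 2                      # prob of outcome |0> under Z
    else:
        plus = np.array([1.0, 1.0]) / np.sqrt(2)
        p0 = (plus @ psi) ** 2                # prob of outcome |+> under X
    return p0, 1 - p0

# Average success probability over uniform (b0, b1) and queried bit.
total = 0.0
for b0 in (0, 1):
    for b1 in (0, 1):
        psi = prepare(b0, b1)
        total += outcome_probs(psi, 0)[b0] + outcome_probs(psi, 1)[b1]
avg = total / 8
print(round(avg, 4))  # 0.8536, i.e. cos^2(pi/8), the optimal value
```

The paper's trade-off relations concern what happens when a second such decoding is attempted in sequence on the post-measurement state.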
Average Case ϵ-Complexity in Computer Science: A Bayesian View
Relations between average case ϵ-complexity and Bayesian statistics are discussed. An algorithm corresponds to a decision function, and the choice of information corresponds to the choice of an experiment. Adaptive information in ϵ-complexity theory corresponds to the concept of a sequential experiment. Some results are reported, giving ϵ-complexity and minimax-Bayesian interpretations for factor analysis. Results from ϵ-complexity are used to establish that the optimal sequential design is no better than the optimal nonsequential design for that problem.
Infinite sequential Nash equilibrium
In game theory, the concept of Nash equilibrium reflects the collective
stability of some individual strategies chosen by selfish agents. The concept
pertains to different classes of games, e.g. the sequential games, where the
agents play in turn. Two existing results are relevant here: first, all finite
such games have a Nash equilibrium (w.r.t. some given preferences) iff all the
given preferences are acyclic; second, all infinite such games have a Nash
equilibrium, if they involve two agents who compete for victory and if the
actual plays making a given agent win (and the opponent lose) form a
quasi-Borel set. This article generalises these two results via a single
result. More generally, working in Zermelo-Fraenkel set theory plus the axiom
of dependent choice (ZF+DC), it proves a transfer theorem for infinite
sequential games: if all two-agent win-lose games that are built using a
well-behaved class of sets have a Nash equilibrium, then all multi-agent
multi-outcome games that are built using the same well-behaved class of sets
have a Nash equilibrium, provided that the inverse relations of the agents'
preferences are strictly well-founded.
Comment: 14 pages, will be published in LMCS-2011-65
Acyclicity of Preferences, Nash Equilibria, and Subgame Perfect Equilibria: a Formal and Constructive Equivalence
In 1953, Kuhn showed that every sequential game has a Nash equilibrium by showing that a procedure, named "backward induction" in game theory, yields one. It actually yields Nash equilibria that form a proper subclass of Nash equilibria. In 1965, Selten named this proper subclass subgame perfect equilibria. In game theory, payoffs are rewards usually granted at the end of a game. Although traditional game theory mainly focuses on real-valued payoffs that are implicitly ordered by the usual total order over the reals, works of Simon or Blackwell already involved partially ordered payoffs. This paper generalises the notion of sequential game by replacing real-valued payoff functions with abstract atomic objects, called outcomes, and by replacing the usual total order over the reals with arbitrary binary relations over outcomes, called preferences. This yields a general abstract formalism in which Nash equilibrium, subgame perfect equilibrium, and backward induction can still be defined. This paper proves that the following three propositions are equivalent: 1) preferences over the outcomes are acyclic; 2) every sequential game has a Nash equilibrium; 3) every sequential game has a subgame perfect equilibrium. The result is fully computer-certified using Coq. Besides the additional guarantee of correctness, the activity of formalisation using Coq also helps clearly identify the useful definitions and the main articulations of the proof.
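Backward induction as discussed above can be sketched for finite game trees with abstract outcomes and per-agent strict preferences; the toy representation below is mine, not the paper's Coq formalisation:

```python
# A finite sequential game is either ("leaf", outcome) or
# ("node", agent, [subgames]); better[agent](x, y) holds when the agent
# strictly prefers outcome x to outcome y.

def backward_induction(game, better):
    """Return (outcome, strategy); strategy maps node ids to chosen branches.

    When every agent's preference is acyclic, the chosen branches form a
    subgame perfect (hence Nash) equilibrium, as in Kuhn's construction.
    """
    if game[0] == "leaf":
        return game[1], {}
    _, agent, children = game
    results = [backward_induction(c, better) for c in children]
    best = 0
    for i in range(1, len(results)):
        if better[agent](results[i][0], results[best][0]):
            best = i
    strategy = {id(game): best}
    for _, sub in results:
        strategy.update(sub)
    return results[best][0], strategy

# Agent 1 moves after agent 0; outcomes are payoff pairs ordered per agent.
game = ("node", 0, [("node", 1, [("leaf", (3, 1)), ("leaf", (0, 0))]),
                    ("leaf", (2, 2))])
better = {i: (lambda i: lambda x, y: x[i] > y[i])(i) for i in (0, 1)}
outcome, _ = backward_induction(game, better)
print(outcome)  # (3, 1): agent 1 picks (3, 1) in the subgame, agent 0 enters it
```

Replacing the numeric comparisons in `better` with arbitrary acyclic relations is exactly the generalisation the paper studies.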
Universal Algorithmic Intelligence: A mathematical top-down approach
Sequential decision theory formally solves the problem of rational agents in
uncertain worlds if the true environmental prior probability distribution is
known. Solomonoff's theory of universal induction formally solves the problem
of sequence prediction for unknown prior distribution. We combine both ideas
and get a parameter-free theory of universal Artificial Intelligence. We give
strong arguments that the resulting AIXI model is the most intelligent unbiased
agent possible. We outline how the AIXI model can formally solve a number of
problem classes, including sequence prediction, strategic games, function
minimization, reinforcement and supervised learning. The major drawback of the
AIXI model is that it is uncomputable. To overcome this problem, we construct a
modified algorithm AIXItl that is still effectively more intelligent than any
other time-t and length-l bounded agent. The computation time of AIXItl is of
the order t · 2^l. The discussion includes formal definitions of intelligence
order relations, the horizon problem and relations of the AIXI theory to other
AI approaches.
Comment: 70 pages
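The known-prior case referred to in the first sentence can be illustrated with finite-horizon expectimax over a mixture of environments; this toy setup is mine and is far simpler than the AIXI definition itself:

```python
# Two toy deterministic environments with a known prior; the agent never
# observes rewards in this sketch, so no belief update is needed.
envs = {"e0": lambda a: 1.0 if a == 0 else 0.0,   # rewards action 0
        "e1": lambda a: 1.0 if a == 1 else 0.0}   # rewards action 1
prior = {"e0": 0.7, "e1": 0.3}

def expectimax(horizon):
    """V = max_a sum_e prior(e) * r_e(a) + V(next), over `horizon` steps."""
    if horizon == 0:
        return 0.0
    return max(sum(prior[e] * envs[e](a) for e in envs) + expectimax(horizon - 1)
               for a in (0, 1))

print(round(expectimax(3), 6))  # 2.1: three steps of the better action, 0.7 each
```

AIXI replaces this fixed prior with Solomonoff's universal prior over all computable environments, which is what makes the full model uncomputable.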
A longitudinal project of new venture teamwork and outcomes
This chapter presents a research project dedicated to better understanding how new venture teams work together to achieve desired outcomes. Teams, rather than individuals, start the majority of all innovative new ventures. Yet little research or theory exists in new venture settings about how members interact with each other over time—teamwork—to produce innovative technologies, products, and services. We believe a systematic study of the social and psychological processes that underlie new venture teamwork and venture outcomes is timely and important. Unique features of our research project include: (1) a team-level focus on social and psychological processes, to assess relations to proximal outcomes (e.g., innovation, first sales, and team satisfaction) and distal value-creation outcomes (e.g., sales growth, raised capital, and profits); (2) combined qualitative and quantitative research methodologies to provide both theory building and theory testing for the relations of interest; (3) a time-sequential design with data collection every three months over one year, allowing us to investigate the relations of interest for new ventures.