Regular Boardgames
We propose a new General Game Playing (GGP) language called Regular
Boardgames (RBG), which is based on the theory of regular languages. The
objective of RBG is to combine key properties such as expressiveness, efficiency, and
naturalness of description in one GGP formalism, compensating for certain
drawbacks of the existing languages. This often makes RBG more suitable for
various research and practical developments in GGP. While intended mostly for
describing board games, RBG is universal for the class of all finite
deterministic turn-based games with perfect information. We establish
the foundations of RBG and analyze it theoretically and experimentally, focusing
on the efficiency of reasoning. Regular Boardgames is the first GGP language
that allows efficiently encoding and playing games with complex rules and with
a large branching factor (e.g., amazons, arimaa, large chess variants, go,
international checkers, paper soccer).
Comment: AAAI 2019
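To make the regular-language idea concrete, here is a minimal Python sketch in which the legal move sequences of a toy game form a regular language over atomic action tokens; the tokens, the toy rules, and the pattern are illustrative assumptions, not actual RBG syntax.

import re

# Hypothetical toy game (not actual RBG syntax): a legal move is one or more
# upward steps ("u"), optionally followed by a capture ("c"), and ends with a
# finish action ("f"). The set of legal moves is a regular language over tokens.
MOVE_LANGUAGE = re.compile(r"u+c?f")

def is_legal(move_tokens):
    """Return True if the token sequence spells a word of the move language."""
    return MOVE_LANGUAGE.fullmatch("".join(move_tokens)) is not None

print(is_legal(["u", "u", "c", "f"]))  # True: two steps, a capture, then finish
print(is_legal(["c", "u", "f"]))       # False: capture before any step

In a full description the tokens would also carry board coordinates and piece placements; the sketch only shows the regular-language skeleton that makes automaton-based move checking possible.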
Gaming security by obscurity
Shannon sought security against the attacker with unlimited computational
powers: *if an information source conveys some information, then Shannon's
attacker will surely extract that information*. Diffie and Hellman refined
Shannon's attacker model by taking into account the fact that the real
attackers are computationally limited. This idea became one of the greatest new
paradigms in computer science, and led to modern cryptography.
Shannon also sought security against the attacker with unlimited logical and
observational powers, expressed through the maxim that "the enemy knows the
system". This view is still endorsed in cryptography. The popular formulation,
going back to Kerckhoffs, is that "there is no security by obscurity", meaning
that the algorithms cannot be kept obscured from the attacker, and that
security should only rely upon the secret keys. In fact, modern cryptography
goes even further than Shannon or Kerckhoffs in tacitly assuming that *if there
is an algorithm that can break the system, then the attacker will surely find
that algorithm*. The attacker is not viewed as an omnipotent computer any more,
but he is still construed as an omnipotent programmer.
So the Diffie-Hellman step from unlimited to limited computational powers has
not been extended into a step from unlimited to limited logical or programming
powers. Is the assumption that all feasible algorithms will eventually be
discovered and implemented really different from the assumption that everything
that is computable will eventually be computed? The present paper explores some
ways to refine the current models of the attacker, and of the defender, by
taking into account their limited logical and programming powers. If the
adaptive attacker actively queries the system to seek out its vulnerabilities,
can the system gain some security by actively learning the attacker's methods, and
adapting to them?
Comment: 15 pages, 9 figures, 2 tables; final version appeared in the
Proceedings of New Security Paradigms Workshop 2011 (ACM 2011); typos
corrected
Logical Reduction of Metarules
Many forms of inductive logic programming (ILP) use metarules, second-order Horn clauses, to define the structure of learnable programs and thus the hypothesis space. Deciding which metarules to use for a given learning task is a major open problem and is a trade-off between efficiency and expressivity: the hypothesis space grows given more metarules, so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. In this paper, we study whether fragments of metarules can be logically reduced to minimal finite subsets. We consider two traditional forms of logical reduction: subsumption and entailment. We also consider a new reduction technique called derivation reduction, which is based on SLD-resolution. We compute reduced sets of metarules for fragments relevant to ILP and theoretically show whether these reduced sets are reductions for more general infinite fragments. We experimentally compare learning with reduced sets of metarules on three domains: Michalski trains, string transformations, and game rules. In general, derivation-reduced sets of metarules outperform subsumption- and entailment-reduced sets, both in terms of predictive accuracies and learning times.
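As a rough illustration of how metarules carve out the hypothesis space, the following Python sketch instantiates two common metarule shapes (identity and chain) over a tiny predicate vocabulary; the templates and predicate names are illustrative assumptions, not the paper's experimental setup.

from itertools import product

# Metarules as second-order templates: the predicate variables P, Q, R range
# over a given predicate vocabulary, and every instantiation is one candidate
# first-order clause, so the hypothesis space grows with the set of metarules.
METARULES = {
    "ident": "{P}(A,B) :- {Q}(A,B)",
    "chain": "{P}(A,B) :- {Q}(A,C), {R}(C,B)",
}
PREDICATES = ["parent", "ancestor"]

def instantiate(metarules, predicates):
    clauses = []
    for template in metarules.values():
        arity = template.count("{")  # number of predicate variables in the template
        for preds in product(predicates, repeat=arity):
            slots = dict(zip(["P", "Q", "R"][:arity], preds))
            clauses.append(template.format(**slots))
    return clauses

space = instantiate(METARULES, PREDICATES)
print(len(space), "candidate clauses")
for clause in space:
    print(" ", clause)

Even with two predicates these two metarules already yield 2^2 + 2^3 = 12 candidate clauses, which is why pruning the metarule set without losing expressivity matters.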
A New Constructivist AI: From Manual Methods to Self-Constructive Systems
The development of artificial intelligence (AI) systems has to date been largely one of manual labor. This constructionist approach to AI has resulted in systems with limited-domain application and severe performance brittleness. No AI architecture to date incorporates, in a single system, the many features that make natural intelligence general-purpose, including system-wide attention, analogy-making, system-wide learning, and various other complex transversal functions. Going beyond current AI systems will require a significantly more complex system architecture than has been attempted to date. The heavy reliance on direct human specification and intervention in constructionist AI brings severe theoretical and practical limitations to any system built that way.
One way to address the challenge of artificial general intelligence (AGI) is to replace a top-down architectural design approach with methods that allow the system to manage its own growth. This calls for a fundamental shift from hand-crafting to self-organizing architectures and self-generated code – what we call a constructivist AI approach, in reference to the self-constructive principles on which it must be based. Methodologies employed for constructivist AI will be very different from today’s software development methods; instead of relying on direct design of mental functions and their implementation in a cognitive architecture, they must address the principles – the “seeds” – from which a cognitive architecture can automatically grow. In this paper I describe the argument in detail and examine some of the implications of this impending paradigm shift.
Complex Systems: A Survey
A complex system is a system composed of many interacting parts, often called
agents, which displays collective behavior that does not follow trivially from
the behaviors of the individual parts. Examples include condensed matter
systems, ecosystems, stock markets and economies, biological evolution, and
indeed the whole of human society. Substantial progress has been made in the
quantitative understanding of complex systems, particularly since the 1980s,
using a combination of basic theory, much of it derived from physics, and
computer simulation. The subject is a broad one, drawing on techniques and
ideas from a wide range of areas. Here I give a survey of the main themes and
methods of complex systems science and an annotated bibliography of resources,
ranging from classic papers to recent books and reviews.
Comment: 10 pages
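As a token example of the simulation side of the field, here is a minimal Python sketch of a majority-rule opinion model on a ring: each agent repeatedly adopts the majority opinion of its three-site neighbourhood, and ordered domains emerge from purely local interactions. The model and its parameters are illustrative choices, not taken from the survey.

import random

def step(states):
    """One synchronous update: every agent takes the majority of its neighbourhood."""
    n = len(states)
    new = states[:]
    for i in range(n):
        neighbourhood = [states[(i - 1) % n], states[i], states[(i + 1) % n]]
        new[i] = 1 if sum(neighbourhood) >= 2 else 0
    return new

random.seed(0)
states = [random.randint(0, 1) for _ in range(40)]
for _ in range(10):
    print("".join(map(str, states)))
    states = step(states)

Even this trivially simple model shows the characteristic pattern: local rules, no central coordination, and yet large-scale ordered structure in the collective state.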
Quantum de Finetti Theorems under Local Measurements with Applications
Quantum de Finetti theorems are a useful tool in the study of correlations in
quantum multipartite states. In this paper we prove two new quantum de Finetti
theorems, both showing that under tests formed by local measurements one can
get a much improved error dependence on the dimension of the subsystems. We
also obtain similar results for non-signaling probability distributions. We
give the following applications of the results:
We prove the optimality of the Chen-Drucker protocol for 3-SAT, under the
exponential time hypothesis.
We show that the maximum winning probability of free games can be estimated
in polynomial time by linear programming. We also show that 3-SAT with m
variables can be reduced to obtaining a constant-error approximation of the
maximum winning probability under entangled strategies of O(m^{1/2})-player
one-round non-local games, in which the players communicate O(m^{1/2}) bits
in total.
We show that the optimization of certain polynomials over the hypersphere can
be performed in quasipolynomial time in the number of variables n by
considering O(log(n)) rounds of the Sum-of-Squares (Parrilo/Lasserre) hierarchy
of semidefinite programs. As an application to entanglement theory, we find a
quasipolynomial-time algorithm for deciding multipartite separability.
We consider a result due to Aaronson -- showing that given an unknown n qubit
state one can perform tomography that works well for most observables by
measuring only O(n) independent and identically distributed (i.i.d.) copies of
the state -- and relax the assumption of having i.i.d. copies of the state to
merely the ability to select subsystems at random from a quantum multipartite
state.
The proofs of the new quantum de Finetti theorems are based on information
theory, in particular on the chain rule of mutual information.
Comment: 39 pages, no figures. v2: changes to references and other minor
improvements. v3: added some explanations, mostly about Theorem 1 and
Conjecture 5. STOC version. v4, v5: small improvements and fixes
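For orientation, the generic shape of a quantum de Finetti theorem can be written in LaTeX as below; the error term is only indicative of the classic trace-norm bound, and the precise norms and constants of the new theorems are those stated in the paper, not reproduced here.

% For a permutation-invariant state \rho_n on (\mathbb{C}^d)^{\otimes n}, its
% reduction to k subsystems is close to a convex mixture of i.i.d. states:
\[
  \min_{\mu}\;\Bigl\| \operatorname{tr}_{n-k}\,\rho_n
    \;-\; \int \sigma^{\otimes k}\,\mathrm{d}\mu(\sigma) \Bigr\|
  \;\le\; \varepsilon(k, n, d),
\]
% where \mu ranges over probability measures on single-subsystem states. In
% trace norm the classic error term scales roughly as d^2 k / n; the point of
% the theorems above is that when closeness is only tested through local
% measurements, the dependence on the subsystem dimension d becomes far milder.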