
    Evaluating Matrix Circuits

    The circuit evaluation problem (also known as the compressed word problem) for finitely generated linear groups is studied. The best upper bound for this problem is $\mathsf{coRP}$, which is shown by a reduction to polynomial identity testing. Conversely, the compressed word problem for the linear group $\mathsf{SL}_3(\mathbb{Z})$ is equivalent to polynomial identity testing. In the paper, it is shown that the compressed word problem for every finitely generated nilpotent group is in $\mathsf{DET} \subseteq \mathsf{NC}^2$. Within the larger class of polycyclic groups, we find examples where the compressed word problem is at least as hard as polynomial identity testing for skew arithmetic circuits.
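    To make the reduction target concrete, the following is a minimal sketch in Python of randomized polynomial identity testing in the Schwartz-Zippel style, the kind of test that yields $\mathsf{coRP}$ upper bounds; the gate encoding and all names are illustrative assumptions, not taken from the paper.

        import random

        # Minimal randomized polynomial identity testing (Schwartz-Zippel):
        # evaluate the circuit at random points over a large prime field.
        # The gate encoding below is an illustrative assumption.

        PRIME = (1 << 61) - 1  # a large prime modulus

        def eval_circuit(circuit, point):
            """Evaluate an arithmetic circuit given as a list of gates.

            Each gate is ('input', i), ('const', c), ('add', a, b) or
            ('mul', a, b), where a and b index earlier gates; the last
            gate is the output.
            """
            values = []
            for gate in circuit:
                if gate[0] == 'input':
                    values.append(point[gate[1]] % PRIME)
                elif gate[0] == 'const':
                    values.append(gate[1] % PRIME)
                elif gate[0] == 'add':
                    values.append((values[gate[1]] + values[gate[2]]) % PRIME)
                else:  # 'mul'
                    values.append((values[gate[1]] * values[gate[2]]) % PRIME)
            return values[-1]

        def probably_zero(circuit, num_vars, trials=20):
            """True if the circuit likely computes the zero polynomial.

            A nonzero polynomial of degree d vanishes at a random point
            with probability at most d / PRIME, so a nonzero circuit is
            caught with high probability, while a zero circuit is never
            misclassified: the one-sided error behind coRP.
            """
            for _ in range(trials):
                point = [random.randrange(PRIME) for _ in range(num_vars)]
                if eval_circuit(circuit, point) != 0:
                    return False
            return True

        # (x + y) * (x - y) - (x^2 - y^2) is identically zero.
        circuit = [
            ('input', 0), ('input', 1),      # gates 0, 1: x, y
            ('add', 0, 1),                   # gate 2: x + y
            ('const', -1), ('mul', 3, 1),    # gates 3, 4: -1, -y
            ('add', 0, 4),                   # gate 5: x - y
            ('mul', 2, 5),                   # gate 6: (x + y)(x - y)
            ('mul', 0, 0), ('mul', 1, 1),    # gates 7, 8: x^2, y^2
            ('mul', 3, 8),                   # gate 9: -y^2
            ('add', 7, 9),                   # gate 10: x^2 - y^2
            ('mul', 3, 10),                  # gate 11: -(x^2 - y^2)
            ('add', 6, 11),                  # gate 12: the difference
        ]
        print(probably_zero(circuit, num_vars=2))  # True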

    Four Lessons in Versatility or How Query Languages Adapt to the Web

    Exposing not only human-centered information but also machine-processable data on the Web is one of the commonalities of recent Web trends. It has enabled a new kind of applications and businesses where the data is used in ways not foreseen by the data providers. Yet this exposition has fractured the Web into islands of data, each in a different Web format: some providers choose XML, others RDF, again others JSON or OWL for their data, even in similar domains. This fracturing stifles innovation, as application builders have to cope not with one Web stack (e.g., XML technology) but with several, each of considerable complexity. With Xcerpt we have developed a rule- and pattern-based query language that aims to shield application builders from much of this complexity: in a single query language, XML and RDF data can be accessed, processed, combined, and re-published. Though the need for combined access to XML and RDF data has been recognized in previous work (including the W3C’s GRDDL), our approach differs in four main aspects: (1) we provide a single language (rather than two separate or embedded languages), thus minimizing the conceptual overhead of dealing with disparate data formats; (2) both the declarative (logic-based) and the operational semantics are unified in that they apply to querying XML and RDF in the same way; (3) we show that the resulting query language can be implemented reusing traditional database technology, if desirable; (4) nevertheless, we also give a unified evaluation approach based on interval labelings of graphs that is at least as fast as existing approaches for tree-shaped XML data, yet provides linear-time and linear-space querying also for many RDF graphs. We believe that Web query languages are the right tool for declarative data access in Web applications and that Xcerpt is a significant step towards more convenient, yet highly efficient, data access in a “Web of Data”.
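    As a concrete illustration of the interval-labeling idea, here is a minimal sketch of the general technique on trees, not of Xcerpt's actual implementation; the Node class and function names are hypothetical.

        # Interval labeling of a tree: a single preorder walk assigns each
        # node an interval such that ancestorship becomes interval
        # containment, i.e. two integer comparisons.

        class Node:
            def __init__(self, tag, children=()):
                self.tag = tag
                self.children = list(children)
                self.interval = None  # (start, end), set by label()

        def label(root):
            """Assign (start, end) intervals in one depth-first pass."""
            counter = 0
            def visit(node):
                nonlocal counter
                start = counter
                counter += 1
                for child in node.children:
                    visit(child)
                node.interval = (start, counter)
                counter += 1
            visit(root)

        def is_ancestor(u, v):
            """u is a proper ancestor of v iff u's interval contains v's."""
            return u.interval[0] < v.interval[0] and v.interval[1] < u.interval[1]

        # The XML document <a><b><c/></b><d/></a>:
        c = Node('c'); b = Node('b', [c]); d = Node('d'); a = Node('a', [b, d])
        label(a)
        print(is_ancestor(a, c), is_ancestor(b, d))  # True False

    After the single labeling pass, structural query steps over tree-shaped data reduce to constant-time comparisons; the unified evaluation approach of the abstract extends such labelings so that many RDF graphs enjoy the same linear-time and linear-space behavior.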

    Finite state verifiers with constant randomness

    We give a new characterization of $\mathsf{NL}$ as the class of languages whose members have certificates that can be verified with small error in polynomial time by finite state machines that use a constant number of random bits, as opposed to its conventional description in terms of deterministic logarithmic-space verifiers. It turns out that allowing two-way interaction with the prover does not change the class of verifiable languages, and that no polynomially bounded amount of randomness is useful for constant-memory computers when used as language recognizers or public-coin verifiers. A corollary of our main result is that the class of outcome problems corresponding to $O(\log n)$-space bounded games of incomplete information, where the universal player is allowed a constant number of moves, equals $\mathsf{NL}$. Comment: 17 pages. An improved version…

    Model Checking Parse Trees

    Parse trees are fundamental syntactic structures in both computational linguistics and compiler construction. We argue in this paper that, in both fields, there are good incentives for model-checking the set of parse trees of a word according to a context-free grammar. We put forward the adequacy of propositional dynamic logic (PDL) on trees in these applications and, as a sanity check, study the complexity of the corresponding model-checking problem: although complete for exponential time in the general case, we find natural restrictions on grammars for our applications and establish complexities ranging from nondeterministic polynomial time to polynomial space in the relevant cases. Comment: 21 + x pages.
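    To make the setting concrete, here is a minimal sketch under my own assumptions, not the paper's algorithm: a CYK-style chart shared by all parse trees of a word under a grammar in Chomsky normal form. The sketch merely counts parse trees, but a model checker for tree formulas could evaluate over the same shared structure.

        # Count the parse trees of `word` from `start` under a CNF grammar.
        # `grammar` maps a nonterminal to its productions, each either a
        # terminal character or a pair of nonterminals.

        def count_parses(grammar, start, word):
            n = len(word)
            # chart[i][j][A] = number of parse trees deriving word[i:j] from A
            chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
            for i, ch in enumerate(word):
                for lhs, rules in grammar.items():
                    for rule in rules:
                        if rule == ch:
                            cell = chart[i][i + 1]
                            cell[lhs] = cell.get(lhs, 0) + 1
            for span in range(2, n + 1):
                for i in range(n - span + 1):
                    j = i + span
                    for k in range(i + 1, j):          # split point
                        for lhs, rules in grammar.items():
                            for rule in rules:
                                if isinstance(rule, tuple):
                                    left = chart[i][k].get(rule[0], 0)
                                    right = chart[k][j].get(rule[1], 0)
                                    if left and right:
                                        cell = chart[i][j]
                                        cell[lhs] = cell.get(lhs, 0) + left * right
            return chart[0][n].get(start, 0)

        # The ambiguous grammar S -> S S | 'a' has Catalan-many parse trees.
        grammar = {'S': [('S', 'S'), 'a']}
        print(count_parses(grammar, 'S', 'aaa'))  # 2

    The ambiguity is the point: a word can have many parse trees under an ambiguous grammar, which is why the paper speaks of model-checking sets of parse trees rather than a single one.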

    Simon's Bounded Rationality. Origins and use in economic theory

    The paper aims to show how Simon's notion of bounded rationality should be interpreted in the light of its connection with artificial intelligence. This connection points out that bounded rationality is a highly structured concept, and sheds light on several implications of Simon's general views on rationality. Finally, offering three paradigmatic examples, the article presents the view that recent approaches, which refer to Simon's heterodox theory, only partially accept the teachings of their inspirer, splitting bounded rationality from the context of artificial intelligence.

    Renormalization and Computation II: Time Cut-off and the Halting Problem

    This is the second installment to the project initiated in [Ma3]. In the first part, I argued that both the philosophy and the technique of perturbative renormalization in quantum field theory could be meaningfully transplanted to the theory of computation, and sketched several contexts supporting this view. In this second part, I address some of the issues raised in [Ma3] and provide their development in three contexts: a categorification of algorithmic computations; time cut-off and Anytime Algorithms; and finally, a Hopf algebra renormalization of the Halting Problem. Comment: 28 pages.

    A Swiss Pocket Knife for Computability

    This research is about operational- and complexity-oriented aspects of the classical foundations of computability theory. The approach is to re-examine some classical theorems and constructions, but with new criteria for success that are natural from a programming language perspective. Three cornerstones of computability theory are the S-m-n theorem, Turing's "universal machine", and Kleene's second recursion theorem. In today's programming language parlance these are, respectively, partial evaluation, self-interpretation, and reflection. In retrospect it is fascinating that Kleene's 1938 proof is constructive and, in essence, builds a self-reproducing program. Computability theory originated in the 1930s, long before the invention of computers and programs. Its emphasis was on delimiting the boundaries of computability. Some milestones include 1936 (Turing), 1938 (Kleene), 1967 (isomorphism of programming languages), 1985 (partial evaluation), 1989 (theory implementation), 1993 (efficient self-interpretation) and 2006 (term register machines). The "Swiss pocket knife" of the title is a programming language that allows efficient computer implementation of all three computability cornerstones, emphasising the third: Kleene's second recursion theorem. We describe experiments with a tree-based computational model aiming for both fast program generation and fast execution of the generated programs. Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
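    The self-reproducing program behind Kleene's construction can be demonstrated with a quine; the following Python sketch is an illustration of the idea only, not the paper's tree-based language.

        # A self-reproducing program: the string s is the program's
        # description of itself, and the code applies that description to
        # itself, mirroring the diagonal step in Kleene's second recursion
        # theorem.
        s = 's = %r\nprint(s %% s)'
        print(s % s)

    Running it prints its own source text exactly; the second recursion theorem strengthens this from printing to arbitrary computation on the program's own text.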

    On the Complexity and Performance of Parsing with Derivatives

    Current algorithms for context-free parsing inflict a trade-off between ease of understanding, ease of implementation, theoretical complexity, and practical performance. No algorithm achieves all of these properties simultaneously. Might et al. (2011) introduced parsing with derivatives, which handles arbitrary context-free grammars while being both easy to understand and simple to implement. Despite much initial enthusiasm and a multitude of independent implementations, its worst-case complexity has never been proven to be better than exponential. In fact, high-level arguments claiming it is fundamentally exponential have been advanced and even accepted as part of the folklore. Performance ended up being sluggish in practice, and this sluggishness was taken as informal evidence of exponentiality. In this paper, we reexamine the performance of parsing with derivatives. We have discovered that it is not exponential but, in fact, cubic. Moreover, simple (though perhaps not obvious) modifications to the implementation by Might et al. (2011) lead to an implementation that is not only easy to understand but also highly performant in practice. Comment: 13 pages; 12 figures; implementation at http://bitbucket.org/ucombinator/parsing-with-derivatives/ ; published in PLDI '16, Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation, June 13 - 17, 2016, Santa Barbara, CA, USA.
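    For readers new to the technique, here is a minimal sketch of Brzozowski derivatives for ordinary regular expressions, the idea that parsing with derivatives lifts to full context-free grammars (the lifting requires laziness, memoization, and fixed points, none of which appear below; the encoding is mine, not the paper's). Requires Python 3.10+ for pattern matching.

        from dataclasses import dataclass

        # Brzozowski derivatives: deriv(e, c) matches exactly the words w
        # such that e matches c + w, so a word is accepted iff the
        # expression left after deriving by every character is nullable.

        class Re: pass

        @dataclass(frozen=True)
        class Empty(Re): pass            # matches nothing

        @dataclass(frozen=True)
        class Eps(Re): pass              # matches only the empty word

        @dataclass(frozen=True)
        class Chr(Re):                   # matches one character
            c: str

        @dataclass(frozen=True)
        class Alt(Re):                   # union
            l: Re
            r: Re

        @dataclass(frozen=True)
        class Seq(Re):                   # concatenation
            l: Re
            r: Re

        @dataclass(frozen=True)
        class Star(Re):                  # Kleene star
            r: Re

        def nullable(e):
            """Does e match the empty word?"""
            match e:
                case Eps() | Star(_):
                    return True
                case Alt(l, r):
                    return nullable(l) or nullable(r)
                case Seq(l, r):
                    return nullable(l) and nullable(r)
                case _:
                    return False

        def deriv(e, c):
            """The derivative of e with respect to the character c."""
            match e:
                case Chr(d) if d == c:
                    return Eps()
                case Alt(l, r):
                    return Alt(deriv(l, c), deriv(r, c))
                case Seq(l, r) if nullable(l):
                    return Alt(Seq(deriv(l, c), r), deriv(r, c))
                case Seq(l, r):
                    return Seq(deriv(l, c), r)
                case Star(r):
                    return Seq(deriv(r, c), e)
                case _:
                    return Empty()

        def matches(e, word):
            for c in word:
                e = deriv(e, c)
            return nullable(e)

        # (ab)* accepts "abab" but rejects "aba".
        e = Star(Seq(Chr('a'), Chr('b')))
        print(matches(e, 'abab'), matches(e, 'aba'))  # True False

    Each input symbol rewrites the recognizer itself rather than stepping a precomputed automaton; the paper's contribution is showing that, with the right memoization, the analogous process for context-free grammars runs in cubic time.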
