
    The submonoid and rational subset membership problems for graph groups

    We show that the membership problem in a finitely generated submonoid of a graph group (also called a right-angled Artin group or a free partially commutative group) is decidable if and only if the independence graph (commutation graph) is a transitive forest. As a consequence we obtain the first example of a finitely presented group with a decidable generalized word problem that does not have a decidable membership problem for finitely generated submonoids. We also show that the rational subset membership problem is decidable for a graph group if and only if the independence graph is a transitive forest, answering a question of Kambites, Silva, and the second author. Finally we prove that for certain amalgamated free products and HNN-extensions the rational subset and submonoid membership problems are recursively equivalent. In particular, this applies to finitely generated groups with two or more ends that are either torsion-free or residually finite.
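
    The decidability criterion above is purely graph-theoretic: the independence graph must be a transitive forest, i.e. the comparability graph of a forest. As a hedged illustration, a brute-force membership test for this graph class could look like the sketch below; it relies on the classical characterisation of transitive forests as the graphs with no induced P4 or C4, which is assumed here rather than taken from the paper, and the function name is ours. It rejects the path on four vertices, whose graph group therefore falls outside the decidable case, while accepting star-shaped independence graphs.

```python
from itertools import combinations

def is_transitive_forest(vertices, edges):
    """Return True iff the undirected graph is a transitive forest
    (the comparability graph of a forest).  Assumed characterisation:
    no induced P4 (path on 4 vertices) and no induced C4 (cycle on
    4 vertices)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    for quad in combinations(vertices, 4):
        induced = [(u, v) for u, v in combinations(quad, 2) if v in adj[u]]
        deg = {v: 0 for v in quad}
        for u, v in induced:
            deg[u] += 1
            deg[v] += 1
        degrees = sorted(deg.values())
        if len(induced) == 3 and degrees == [1, 1, 2, 2]:
            return False  # induced P4 found
        if len(induced) == 4 and degrees == [2, 2, 2, 2]:
            return False  # induced C4 found
    return True

# The path a-b-c-d is not a transitive forest ...
print(is_transitive_forest("abcd", [("a", "b"), ("b", "c"), ("c", "d")]))  # False
# ... while a star (one generator commuting with all others) is.
print(is_transitive_forest("abcd", [("a", "b"), ("a", "c"), ("a", "d")]))  # True
```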

    Compressed Decision Problems in Groups

    We study problems in algorithmic group theory, investigating the complexity of compressed versions of the word problem and the conjugacy problem for finitely generated groups. For a fixed finitely generated group, the word problem asks whether a given word over the generating set represents the identity element of the group. We, however, consider the given word in compressed form, as a straight-line program (SLP), and study the complexity of this problem, which we call the "compressed word problem". SLPs are context-free grammars that generate exactly one string, and the input size is always the size of the given SLP. A main motivation is that, for a fixed finitely generated group, the word problem of its automorphism group is polynomial-time Turing-reducible to the compressed word problem of the group itself. We study the compressed word problem for the common group constructions of HNN extensions (as well as amalgamated products and graph products) and show that instances of the compressed word problem are polynomial-time Turing-reducible to instances of the compressed word problem of the base group (respectively the base groups and vertex groups). Furthermore, we show that the compressed word problem for finitely generated nilpotent groups is decidable in polynomial time. We also consider a compressed variant of the conjugacy problem. The uncompressed conjugacy problem asks, for two given words over the generators of a fixed finitely generated group, whether they are conjugate in that group. In the compressed conjugacy problem the input consists of two SLPs, and the question is whether the two words they generate represent conjugate elements of the group. We show that the compressed conjugacy problem for graph groups is decidable in polynomial time. Furthermore, we study the word problem of outer automorphism groups of graph products of finitely generated groups. Exploiting the close connection between the compressed conjugacy problem of a group and the word problem of its outer automorphism group, we show that the word problem of the outer automorphism group of a graph product of finitely generated groups is polynomial-time Turing-reducible to instances of simultaneous compressed conjugacy problems and instances of compressed word problems of the vertex groups. As an application, the above results also hold for right-angled Coxeter groups and graph groups, since both are special graph products. It follows, for example, that the compressed word problem of a right-angled Coxeter group is decidable in polynomial time.
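
    The compression mechanism behind these results is easy to state: since an SLP is a context-free grammar producing exactly one word, a word of length exponential in the grammar size can be handled with a polynomial-size input. A minimal sketch (the dictionary representation and variable names are ours, purely for illustration) that measures the produced word without ever expanding it:

```python
# A straight-line program: every nonterminal produces either a single
# terminal symbol or the concatenation of two earlier nonterminals.
# This SLP produces the word (ab)^512, of length 1024, from only 12 rules.
slp = {"X0": "a", "X1": "b", "X2": ("X0", "X1")}
for i in range(3, 12):
    slp[f"X{i}"] = (f"X{i-1}", f"X{i-1}")

def length(slp, var, memo=None):
    """Length of the word produced by `var`, computed without expanding it."""
    memo = {} if memo is None else memo
    if var not in memo:
        rule = slp[var]
        memo[var] = 1 if isinstance(rule, str) else (
            length(slp, rule[0], memo) + length(slp, rule[1], memo))
    return memo[var]

print(length(slp, "X11"))  # 1024, although the SLP itself has only 12 rules
```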

    Complexity Lower Bounds for Computing the Approximately-Commuting Operator Value of Non-Local Games to High Precision

    We study the problem of approximating the commuting-operator value of a two-player non-local game. It is well known that it is NP-complete to decide whether the classical value of a non-local game is 1 or 1 - epsilon, promised that one of the two is the case. Furthermore, as long as epsilon is small enough, this result does not depend on the gap epsilon. In contrast, a recent result of Fitzsimons, Ji, Vidick, and Yuen shows that the complexity of computing the quantum value grows without bound as the gap epsilon decreases. In this paper, we show that this also holds for the commuting-operator value of a game. Specifically, in the language of multi-prover interactive proofs, we show that the power of MIP^{co}(2,1,1,s) (proofs with two provers, one round, completeness probability 1, soundness probability s, and commuting-operator strategies) can increase without bound as the gap 1-s gets arbitrarily small. Our results also extend naturally in two ways: to perfect zero-knowledge protocols, and to lower bounds on the complexity of computing the approximately-commuting value of a game. Thus we get lower bounds on the complexity class PZK-MIP^{co}_{delta}(2,1,1,s) of perfect zero-knowledge multi-prover proofs with approximately-commuting operator strategies, as the gap 1-s gets arbitrarily small. While we do not know any computable time upper bound on the class MIP^{co}, a result of the first author and Vidick shows that for s = 1-1/poly(f(n)) and delta = 1/poly(f(n)), the class MIP^{co}_{delta}(2,1,1,s), with constant communication from the provers, is contained in TIME(exp(poly(f(n)))). We give a lower bound of coNTIME(f(n)) (ignoring constants inside the function) for this class, which is tight up to polynomial factors assuming the exponential time hypothesis.
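
    For orientation, the classical value mentioned at the start is simply a maximum over deterministic strategies, which for small games can be computed by brute force. The sketch below is a generic helper of ours, using the standard CHSH game as an example rather than anything from this paper; it recovers the familiar classical value of 0.75. It is the quantum and commuting-operator values that escape this kind of finite enumeration.

```python
from itertools import product

def classical_value(questions, answers, predicate, prior):
    """Brute-force classical value of a two-player one-round game:
    maximise the winning probability over all deterministic strategies.
    Questions are assumed to be 0..n-1 so they index strategy tuples."""
    best = 0.0
    strategies = list(product(answers, repeat=len(questions)))
    for fa in strategies:              # Alice's deterministic strategy
        for fb in strategies:          # Bob's deterministic strategy
            win = sum(prior[(x, y)]
                      for x, y in product(questions, repeat=2)
                      if predicate(x, y, fa[x], fb[y]))
            best = max(best, win)
    return best

# CHSH: the players win iff a XOR b == x AND y, with uniform questions.
chsh = classical_value(
    questions=[0, 1], answers=[0, 1],
    predicate=lambda x, y, a, b: (a ^ b) == (x & y),
    prior={(x, y): 0.25 for x in (0, 1) for y in (0, 1)})
print(chsh)  # 0.75, the well-known classical value of CHSH
```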

    Hidden Markov models and neural networks for speech recognition

    The hidden Markov model (HMM) is one of the most successful modeling approaches for acoustic events in speech recognition, and more recently it has proven useful for several problems in biological sequence analysis. Although the HMM is good at capturing the temporal nature of processes such as speech, it has a very limited capacity for recognizing complex patterns involving more than first-order dependencies in the observed data sequences. This is due to the first-order state process and the assumption of state-conditional independence between observations. Artificial neural networks (NNs) are almost the opposite: they cannot model dynamic, temporally extended phenomena very well, but are good at static classification and regression tasks. Combining the two frameworks in a sensible way can therefore lead to a more powerful model with better classification abilities. The overall aim of this work has been to develop a probabilistic hybrid of hidden Markov models and neural networks and …
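
    The two limitations named here, the first-order state process and the state-conditional independence of observations, are visible directly in the forward recursion alpha_t(j) = (sum_i alpha_{t-1}(i) a_ij) * b_j(o_t): each new state depends only on the previous one, and each observation only on the current state. A toy sketch of that recursion (parameter values are illustrative, not from this work):

```python
import numpy as np

def forward(pi, A, B, obs):
    """HMM forward algorithm: probability of an observation sequence under
    the first-order Markov state process (A) and state-conditional
    independence of the observations (B)."""
    alpha = pi * B[:, obs[0]]              # alpha_1(j) = pi_j * b_j(o_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # alpha_t = (alpha_{t-1} A) .* b(o_t)
    return alpha.sum()

# Toy two-state model with two observation symbols (illustrative values only).
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])
B  = np.array([[0.9, 0.1],
               [0.2, 0.8]])
print(forward(pi, A, B, obs=[0, 1, 0]))
```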

    Heterogeneous neural networks: theory and applications

    This work presents a class of functions serving as generalized neuron models to be used in artificial neural networks. They are cast into the common framework of computing a similarity function, a flexible definition of a neuron as a pattern recognizer. The similarity endows the model with a clear conceptual view and serves as a unification cover for many of the existing neural models, including those classically used for the MultiLayer Perceptron (MLP) and most of those used in Radial Basis Function Networks (RBF). These families of models are conceptually unified and their relation is clarified. The possibilities of deriving new instances are explored and several neuron models, representative of their families, are proposed. The similarity view naturally leads to further extensions of the models to handle heterogeneous information, that is to say, information coming from sources radically different in character, including continuous and discrete (ordinal) numerical quantities, nominal (categorical) quantities, and fuzzy quantities. Missing data are also explicitly considered. A neuron of this class is called a heterogeneous neuron, and any neural structure making use of them is a Heterogeneous Neural Network (HNN), regardless of the specific architecture or learning algorithm. Among them, in this work we concentrate on feed-forward networks as the initial focus of study. The learning procedures may include a great variety of techniques, basically divided into derivative-based methods (such as the conjugate gradient) and evolutionary ones (such as variants of genetic algorithms). In this thesis we also explore a number of directions towards the construction of better neuron models, within an integrative envelope, more adapted to the problems they are meant to solve. It is described how a certain generic class of heterogeneous models leads to satisfactory performance, comparable to, and often better than, that of classical neural models, especially in the presence of heterogeneous information and imprecise or incomplete data, in a wide range of domains, most of them corresponding to real-world problems.
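
    The central construction is a neuron that computes a similarity over mixed-type inputs rather than a dot product. The sketch below uses a Gower-style partial-similarity aggregation as a stand-in (the thesis's own similarity measures may differ, and the function and feature names are ours) to show how continuous, nominal, and missing values can be handled in a single unit:

```python
def heterogeneous_neuron(x, w, ranges, activation=lambda s: s):
    """Similarity-based neuron over mixed-type inputs.  Each feature
    contributes a partial similarity in [0, 1] (a Gower-style choice made
    here for illustration, not the thesis's exact measure); missing values
    (None) are skipped explicitly."""
    sims = []
    for xi, wi, r in zip(x, w, ranges):
        if xi is None or wi is None:      # explicit missing-value handling
            continue
        if isinstance(xi, str):           # nominal feature: exact match
            sims.append(1.0 if xi == wi else 0.0)
        else:                             # continuous feature: range-scaled distance
            sims.append(1.0 - abs(xi - wi) / r)
    return activation(sum(sims) / len(sims)) if sims else 0.0

# A "prototype" weight vector mixing a length (cm), a colour and a mass (kg);
# the third input is missing and is simply left out of the aggregation.
print(heterogeneous_neuron(x=[12.0, "red", None],
                           w=[10.0, "red", 3.0],
                           ranges=[20.0, None, 5.0]))  # 0.95
```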

    Dagstuhl Reports : Volume 1, Issue 2, February 2011

    Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
    Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro Pezzé, Martin C. Rinard, Westley Weimer and Andreas Zeller
    Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
    Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
    Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Youn

    High Efficiency Real-Time Sensor and Actuator Control and Data Processing

    The advances in sensor and actuator technology foster the use of large multi-transducer networks in many different fields. The increasing complexity of such networks poses problems in data processing, especially when high efficiency is required for real-time applications. In fact, multi-transducer data processing usually consists of the interconnection and co-operation of several modules devoted to different tasks. Multi-transducer network modules often include tasks such as control, data acquisition, data filtering interfaces, feature selection and pattern analysis. Heterogeneous techniques derived from chemometrics, neural networks and fuzzy rules used to implement such tasks may introduce module interconnection and co-operation issues. To help deal with these problems, the author presents a software library architecture for dynamic and efficient management of multi-transducer data processing and control techniques. The framework's base architecture and the implementation details of several extensions are described. Starting from the base models available in the framework core, dedicated models for control processes and neural network tools have been derived. The Facial Automaton for Conveying Emotion (FACE) has been used as a test field for the control architecture.

    Machine Learning

    Machine learning can be defined in various ways, but it broadly refers to a scientific domain concerned with the design and development of theoretical and implementation tools that allow building systems with some human-like intelligent behavior. More specifically, machine learning addresses the ability of such systems to improve automatically through experience.