    Which Regular Languages can be Efficiently Indexed?

    In the present work, we tackle the regular language indexing problem by first studying the hierarchy of $p$-sortable languages: regular languages accepted by automata of width $p$. We show that the hierarchy is strict and does not collapse, and provide upper and lower bounds (exponential in $p$) relating the minimum widths of equivalent NFAs and DFAs. Our bounds indicate the importance of being able to index NFAs, as they enable indexing regular languages with much faster and smaller indexes. Our second contribution solves precisely this problem, optimally: we devise a polynomial-time algorithm that indexes any NFA with the optimal value $p$ of its width, without explicitly computing $p$ (which is NP-hard to find). In particular, this implies that we can index in polynomial time the well-studied case $p=1$ (Wheeler NFAs). More generally, in polynomial time we can build an index breaking the worst-case conditional lower bound of $\Omega(|P|\,m)$ whenever the input NFA's width is $p \in o(\sqrt{m})$. Comment: Extended version.
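    For the well-studied case $p=1$ mentioned above (Wheeler NFAs), the Wheeler axioms can be checked directly for a candidate state order. The sketch below is a minimal illustration of those axioms, not the paper's indexing algorithm; the toy automaton and the function name are hypothetical.

```python
# A minimal sketch (not the paper's algorithm): check whether a given total
# order on the states of an edge-labeled NFA satisfies the Wheeler axioms,
# i.e. certifies width p = 1. The example automaton and the helper name
# `is_wheeler_order` are illustrative assumptions.

def is_wheeler_order(states, edges, order):
    """states: iterable of state ids; edges: list of (u, label, v);
    order: list giving the candidate total order of states."""
    pos = {q: i for i, q in enumerate(order)}
    in_labels = {q: set() for q in states}
    for u, a, v in edges:
        in_labels[v].add(a)

    # Axiom 0: states with no incoming edges must occupy the first positions.
    sourceless = [q for q in states if not in_labels[q]]
    if any(pos[q] >= len(sourceless) for q in sourceless):
        return False
    for (u1, a1, v1) in edges:
        for (u2, a2, v2) in edges:
            # Axiom 1: if a1 < a2, every state reached by an a1-edge
            # strictly precedes every state reached by an a2-edge.
            if a1 < a2 and pos[v1] >= pos[v2]:
                return False
            # Axiom 2: edges with the same label must not cross the order.
            if a1 == a2 and pos[u1] < pos[u2] and pos[v1] > pos[v2]:
                return False
    return True


if __name__ == "__main__":
    # Toy automaton over {a, b}; the order [0, 1, 2] is chosen by hand.
    states = [0, 1, 2]
    edges = [(0, "a", 1), (0, "b", 2), (1, "a", 1), (1, "b", 2)]
    print(is_wheeler_order(states, edges, order=[0, 1, 2]))  # True for this toy case
```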

    A novel computer Scrabble engine based on probability that performs at championship level

    The thesis starts by giving an introduction to the game of Scrabble, then surveys state-of-the-art computer Scrabble programs and presents some characteristics of Heuri, the Scrabble engine developed in this work. Brief notions of game theory are given, along with the history of some games in artificial intelligence; the fundamental algorithms for game playing are presented, as well as state-of-the-art engines and the algorithms they use. Basic elements of Scrabble, such as the rules and the letter distribution, are given, and the history and state of the art of computer Scrabble are reviewed. For instance, the methods for generating valid moves based on the DAWG (Directed Acyclic Word Graph) data structure and on its variant GADDAG are recalled; these methods are used by the state-of-the-art Scrabble engines Quackle and Maven.

    Then, the contributions of this thesis are presented. A Spanish lexicon for playing Scrabble has been built and is used by the Heuri engines; as part of this construction, a detailed study and classification of Spanish irregular verbs is provided. A novel Scrabble move generator based on anagrams has been designed and implemented, and has been shown to be faster than the GADDAG-based generator used in the Quackle engine. This method is similar to the way Scrabble players look for a move, searching for anagrams and a spot to play on the board. Next, we address the evaluation of moves when playing Scrabble; the quality of play depends on deciding which move to play given a certain board and a rack of tiles. Heuri initially made this decision by trying several heuristics, which led to the construction of several engines. We explain the heuristics used in these engines, all of them based on probabilities. None of these initial heuristic evaluation functions (up to six) use look-ahead; they are static evaluators. After testing, they have shown increasing playing strength, allowing Heuri to beat top-level expert human players in Spanish without the need for sampling and simulation techniques. These heuristics mainly consider the possibility of achieving a bingo on the current board, whereas Quackle uses pre-calculated values (superleaves) that ignore this.

    Then, to further improve Heuri's quality of play, additional engines that employ look-ahead are presented. The HeuriSamp engine performs a 2-ply search to obtain a defense value. The HeuriSim engine uses a 3-ply adversarial search tree: it considers the best first moves (according to the heuristic of Heuri's sixth engine) for Player 1, then some replies to these moves (Player 2's moves), and then some replies to those replies (Player 1's moves). Finally, to improve these engines, opponent modeling is used; this technique predicts some of the opponent's tiles based on the opponent's last play.

    We present results obtained by playing thousands of Heuri vs. Heuri games, collecting important information: general statistics of the Scrabble game, such as a 16-point handicap for the second player, and word statistics in Spanish, such as a list of the most frequently played bingos (words that use all 7 tiles of a player's rack). In addition, we present results of matches played by Heuri against top-level human players in Spanish, and results obtained from large-scale play of the different Heuri engines against the Quackle engine in Spanish, French and English. All these match results demonstrate the championship-level performance of the Heuri engines in the three languages, especially that of the last engine developed, which includes simulation and opponent modeling techniques. From here, the conclusions of the thesis are drawn and future work is envisaged.
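    The anagram-based move generation described above lends itself to a simple illustration. The sketch below (an assumption about the general approach, not Heuri's actual implementation) indexes a lexicon by sorted-letter signatures and enumerates rack subsets, optionally forced through letters already on the board; blanks, board geometry and scoring are omitted, and the toy lexicon and function names are hypothetical.

```python
# A minimal sketch of the anagram-lookup idea: index the lexicon by the
# multiset of letters in each word, then enumerate subsets of the rack
# (optionally combined with board letters) and look up matching words.
from collections import defaultdict
from itertools import combinations

def build_anagram_index(lexicon):
    """Map each sorted-letter signature to the words it spells."""
    index = defaultdict(list)
    for word in lexicon:
        index["".join(sorted(word))].append(word)
    return index

def words_from_rack(rack, index, through=""):
    """All lexicon words formable from any subset of `rack`,
    optionally required to use the board letters in `through`."""
    found = set()
    for k in range(1, len(rack) + 1):
        for combo in combinations(sorted(rack), k):
            sig = "".join(sorted(combo + tuple(through)))
            found.update(index.get(sig, ()))
    return sorted(found)

if __name__ == "__main__":
    lexicon = ["CASA", "SACA", "CAOS", "COSA", "ASCO", "OCAS"]  # toy lexicon
    idx = build_anagram_index(lexicon)
    print(words_from_rack("ACOS", idx))              # words from the rack alone
    print(words_from_rack("ACO", idx, through="S"))  # words using an 'S' on the board
```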

    Proceedings of the Eindhoven FASTAR Days 2004 : Eindhoven, The Netherlands, September 3-4, 2004

    The Eindhoven FASTAR Days (EFD) 2004 were organized by the Software Construction group of the Department of Mathematics and Computer Science at the Technische Universiteit Eindhoven. On September 3rd and 4th 2004, over thirty participants, hailing from the Czech Republic, Finland, France, The Netherlands, Poland and South Africa, gathered at the Department to attend the EFD. The EFD were organized in connection with the research on finite automata by the FASTAR Research Group, which is centered in Eindhoven and at the University of Pretoria, South Africa. FASTAR (Finite Automata Systems: Theoretical and Applied Research) is an international research group that aims to lead in all areas related to finite state systems. The work in FASTAR includes both core and applied parts of this field. The EFD therefore focused on the field of finite automata, with an emphasis on practical aspects and applications. Eighteen presentations, mostly on subjects within this field, were given by researchers as well as students from participating universities and industrial research facilities. This report contains the proceedings of the conference, in the form of papers for twelve of the presentations at the EFD. Most of them were initially reviewed and distributed as handouts during the EFD. After the EFD took place, the papers were revised for publication in these proceedings. We would like to thank the participants for their attendance and presentations, which made the EFD 2004 as successful as they were. Based on this success, it is our intention to make the EFD a recurring event. Eindhoven, December 2004. Loek Cleophas, Bruce W. Watson

    A hybrid deformation model of ventricular myocardium

    The Physics of Open Ended Evolution

    What makes living systems different from non-living ones? Unfortunately, this question is impossible to answer, at least currently. Instead, we must face computationally tangible questions based on our current understanding of physics, computation, information, and biology. Yet we have few insights into how living systems might quantifiably differ from their non-living counterparts, as in a mathematical foundation to explain our observations of biological evolution, emergence, innovation, and organization. The development of a theory of living systems, if at all possible, demands a mathematical understanding of how data generated by complex biological systems changes over time. In addition, this theory ought to be broad enough so as not to be constrained to an Earth-based biochemistry. In this dissertation, the philosophy of studying living systems from the perspective of traditional physics is first explored as a motivating discussion for the subsequent research. Traditionally, we have often thought of the physical world from a bottom-up approach: things happening on a smaller scale aggregate into things happening on a larger scale. In addition, the laws of physics are generally considered static over time. Research suggests that biological evolution may follow dynamic laws that (at least in part) change as a function of the state of the system. Of the three featured research projects, two use cellular automata (CA) as a model to study certain aspects of living systems. These aspects include self-reference, open-ended evolution, local physical universality, subjectivity, and information processing. Open-ended evolution and local physical universality are attributed to the vast amount of innovation observed throughout biological evolution. Biological systems may distinguish themselves in terms of information processing and storage, rather than by lying outside the theory of computation. The final research project concretely explores a real-world phenomenon by mapping dominance hierarchies in the evolution of video game strategies. Though the main question of how life differs from non-life remains unanswered, mechanisms behind open-ended evolution and physical universality are revealed.
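    Since two of the featured projects use cellular automata as their model system, the following minimal sketch of an elementary CA (Rule 110, a classic computationally universal rule) illustrates the kind of simple fixed update rule whose long-term behavior such studies concern. The rule choice, grid size and function names are illustrative and not taken from the dissertation.

```python
# A minimal elementary cellular automaton; Rule 110 and the grid size are
# illustrative choices used only to show the model class being studied.

def step(cells, rule=110):
    """One synchronous update of a 1-D binary CA with periodic boundaries."""
    n = len(cells)
    rule_bits = [(rule >> i) & 1 for i in range(8)]  # output for neighborhoods 0..7
    return [
        rule_bits[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
        for i in range(n)
    ]

if __name__ == "__main__":
    width, steps = 64, 32
    row = [0] * width
    row[width // 2] = 1          # single live cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = step(row)
```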

    On cross-domain social semantic learning

    Approximately 2.4 billion people are now connected to the Internet, generating massive amounts of data through laptops, mobile phones, sensors and other electronic devices. Not surprisingly, ninety percent of the world's digital data was created in the last two years. This massive explosion of data provides a tremendous opportunity to study, model and improve the conceptual and physical systems from which the data is produced. It also permits scientists to test pre-existing hypotheses in various fields with large-scale experimental evidence. Developing computational algorithms that automatically explore this data is thus a holy grail for the current generation of computer scientists.

    Making sense of this data algorithmically can be a complex process, for several reasons. First, the data is generated by different devices, captures different aspects of information, and resides on different web resources and platforms. Therefore, even if two pieces of data bear singular conceptual similarity, their generation, format and domain of existence on the web can make them seem considerably dissimilar. Second, since humans are social creatures, the data often possesses inherent but murky correlations, primarily caused by direct or indirect social interactions. This drastically alters what algorithms must achieve, necessitating intelligent comprehension of the underlying social nature and semantic contexts within the disparate domain data, and a quantifiable way of transferring knowledge gained from one domain to another. Finally, the data is often encountered as a stream rather than as static pages on the Internet; therefore, we must learn, and re-learn, as the stream propagates.

    The main objective of this dissertation is to develop learning algorithms that can identify specific patterns in one domain of data which can consequently augment predictive performance in another domain. The research explores the existence of specific data domains that can function in synergy with one another and, more importantly, proposes models to quantify the synergetic information transfer among such domains. We include large-scale data from various domains in our study: social media data from Twitter, multimedia video data from YouTube, video search query data from Bing Videos, natural language search queries from the web, Internet resources in the form of web logs (blogs), and spatio-temporal social trends from Twitter. Our work presents a series of solutions to address the key challenges in cross-domain learning, particularly in the field of social and semantic data. We propose the concept of bridging media from disparate sources by building a common latent topic space, which represents one of the first attempts toward answering sociological problems using cross-domain (social) media. This allows information transfer between social and non-social domains, fostering real-time socially relevant applications. We also engineer a concept network from the semantic web, called semNet, that can assist in identifying concept relations and modeling information granularity for robust natural language search. Further, by studying spatio-temporal patterns in this data, we can discover categorical concepts that stimulate collective attention within user groups.
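    The common latent topic space proposed above can be illustrated with a hedged sketch: fit a single topic model on documents pooled from two domains, then compare items across domains in the shared topic coordinates. The toy corpora, the use of scikit-learn's LatentDirichletAllocation and the cosine-similarity comparison are assumptions for illustration, not the dissertation's actual pipeline.

```python
# A hedged sketch of a shared latent topic space across two domains
# (e.g. tweets and video descriptions); corpora and model choice are
# illustrative assumptions, not the dissertation's pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

tweets = [
    "new smartphone camera review battery life",
    "election debate tonight live coverage politics",
]
video_descriptions = [
    "unboxing the latest smartphone and testing the camera",
    "full recording of the presidential debate and analysis",
]

# Pool both domains so the fitted topic space is shared.
corpus = tweets + video_descriptions
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(counts)          # rows: documents, cols: topic weights

# Compare documents across domains in the common topic space.
n_tweets = len(tweets)
similarity = cosine_similarity(topics[:n_tweets], topics[n_tweets:])
for i, row in enumerate(similarity):
    best = row.argmax()
    print(f"tweet {i} is closest to video {best} (cosine {row[best]:.2f})")
```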

    Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-And-Solve

    Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size $n$ of data that originally has size $N$, and we want to solve a problem with time complexity $T(\cdot)$. The naive strategy of "decompress-and-solve" gives time $T(N)$, whereas "the gold standard" is time $T(n)$: to analyze the compression as efficiently as if the original data was small. We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (the Lempel-Ziv family, dictionary methods, and others) can be unified under the elegant notion of Grammar Compressions. A vast literature, across many disciplines, established this as an influential notion for algorithm design. We introduce a framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are: - The $O(nN\sqrt{\log(N/n)})$ bound for LCS and the $O(\min\{N \log N, nM\})$ bound for Pattern Matching with Wildcards are optimal up to $N^{o(1)}$ factors, under the Strong Exponential Time Hypothesis. (Here, $M$ denotes the uncompressed length of the compressed pattern.) - Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the $k$-Clique conjecture. - We give an algorithm showing that decompress-and-solve is not optimal for Disjointness. Comment: Presented at FOCS'17. Full version. 63 pages.
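    The notion of grammar compression underlying these results can be made concrete with a minimal sketch of a straight-line program: a grammar in which every rule concatenates two symbols and which derives exactly one string. Given the expansion length of each nonterminal, a single character can be retrieved by descending the grammar rather than decompressing, which illustrates the gap between working with the size-$n$ grammar and the size-$N$ text. The example grammar and helper names below are illustrative, not taken from the paper.

```python
# A minimal straight-line program (grammar compression) sketch; the example
# grammar, which derives a Fibonacci-like string, and the helper names are
# illustrative, not from the paper.

def expansion_lengths(rules):
    """Length of the string derived by each symbol (terminals have length 1)."""
    length = {}
    def size(sym):
        if sym not in rules:          # terminal symbol
            return 1
        if sym not in length:
            length[sym] = sum(size(child) for child in rules[sym])
        return length[sym]
    for sym in rules:
        size(sym)
    return length

def char_at(sym, i, rules, length):
    """i-th character of the expansion of `sym`, found by descending the
    grammar (time proportional to grammar depth, not expanded length)."""
    while sym in rules:
        left, right = rules[sym]
        left_len = length.get(left, 1)
        if i < left_len:
            sym = left
        else:
            sym, i = right, i - left_len
    return sym

if __name__ == "__main__":
    # 'a' and 'b' are terminals; each rule concatenates two symbols.
    rules = {"X3": ("a", "b"), "X4": ("X3", "a"), "X5": ("X4", "X3"), "X6": ("X5", "X4")}
    length = expansion_lengths(rules)
    n_expanded = length["X6"]
    decompressed = "".join(char_at("X6", i, rules, length) for i in range(n_expanded))
    print(decompressed)                      # "abaababa": the decompress-and-solve baseline
    print(char_at("X6", 3, rules, length))   # direct access without full decompression
```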