48 research outputs found

    Efficient Pattern Matching on Binary Strings

    The binary string matching problem consists of finding all occurrences of a pattern in a text, where both strings are built over a binary alphabet. This is an interesting problem in computer science, since binary data are omnipresent in telecom and computer network applications. The problem also finds applications in image processing and in pattern matching on compressed texts. Recently it has been shown that adaptations of classical exact string matching algorithms are not very efficient on binary data. In this paper we present two efficient algorithms for the problem, adapted to avoid any reference to individual bits and thus able to process pattern and text byte by byte. Experimental results show that the new algorithms outperform existing solutions in most cases. Comment: 12 pages
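    The byte-by-byte idea can be pictured with a small sketch (an illustration only, not the paper's algorithms): precompute the pattern at each of its 8 possible bit alignments together with byte masks for its partial boundary bytes, so the scan compares whole bytes and never extracts individual bits. The function names below are hypothetical.

```python
# A minimal sketch of byte-level matching of a bit pattern against a byte
# text, assuming the pattern is given as a string of '0'/'1' characters.
# Illustrates the byte-by-byte idea; it is not the paper's algorithms.

def shifted_pattern(bits: str, shift: int):
    """Byte masks and values for the pattern placed at bit offset `shift`."""
    value = "0" * shift + bits
    mask = "0" * shift + "1" * len(bits)
    value += "0" * (-len(value) % 8)        # pad to whole bytes
    mask += "0" * (-len(mask) % 8)
    pack = lambda s: bytes(int(s[i:i + 8], 2) for i in range(0, len(s), 8))
    return pack(mask), pack(value)

def find_bit_pattern(text: bytes, bits: str):
    """Yield the bit offset of every occurrence of `bits` in `text`."""
    tables = [shifted_pattern(bits, s) for s in range(8)]
    for pos in range(len(text)):
        for shift, (mask, value) in enumerate(tables):
            window = text[pos:pos + len(mask)]
            if len(window) == len(mask) and all(
                    t & m == v for t, m, v in zip(window, mask, value)):
                yield 8 * pos + shift

print(list(find_bit_pattern(bytes([0b10110100, 0b11010010]), "1101")))  # [2, 8]
```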

    The Many Qualities of a New Directly Accessible Compression Scheme

    We present a new variable-length computation-friendly encoding scheme, named SFDC (Succinct Format with Direct aCcessibility), that supports direct and fast access to any element of the compressed sequence and achieves compression ratios often higher than those offered by other solutions in the literature. The SFDC scheme provides a flexible and simple representation geared towards either practical efficiency or compression ratio, as required. For a text of length $n$ over an alphabet of size $\sigma$ and a fixed parameter $\lambda$, the access time of the proposed encoding is proportional to the length of the character's code-word, plus an expected $\mathcal{O}((F_{\sigma-\lambda+3}-3)/F_{\sigma+1})$ overhead, where $F_j$ is the $j$-th Fibonacci number. Overall it uses $N+\mathcal{O}\big(n(\lambda-(F_{\sigma+3}-3)/F_{\sigma+1})\big)=N+\mathcal{O}(n)$ bits, where $N$ is the length of the encoded string. Experimental results show that the performance of our scheme is, in some respects, comparable with the performance of DACs and Wavelet Trees, which are among the most efficient schemes. In addition, our scheme is designed as a \emph{computation-friendly} compression scheme, as it offers several features that make it very effective in text processing tasks. For the string matching problem, which we take as a case study, we experimentally show that the new scheme enables searches up to 29 times faster than standard string-matching techniques on plain texts. Comment: 33 pages
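    The Fibonacci terms in the bounds above come from Fibonacci-based code-words. As background only, here is a sketch of classic Fibonacci coding, not the SFDC format itself: each code-word ends in the only "11" it contains, so code-word boundaries are self-delimiting, the property that makes such codes friendly to direct access.

```python
# Background sketch: classic Fibonacci coding. Not the SFDC format; it only
# illustrates the Fibonacci-number machinery appearing in the bounds above.

def fib_encode(n: int) -> str:
    """Fibonacci code of n >= 1: Zeckendorf digits, least significant
    Fibonacci number first, closed by an extra '1' terminator."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = [0] * (len(fibs) - 1)
    for i in range(len(bits) - 1, -1, -1):   # greedy Zeckendorf decomposition
        if fibs[i] <= n:
            bits[i] = 1
            n -= fibs[i]
    top = max(i for i, b in enumerate(bits) if b)
    return "".join(map(str, bits[:top + 1])) + "1"

def fib_decode(code: str) -> int:
    """Inverse of fib_encode for a single code-word (which ends in '11')."""
    digits = code[:-1]                       # drop the terminator bit
    fibs = [1, 2]
    while len(fibs) < len(digits):
        fibs.append(fibs[-1] + fibs[-2])
    return sum(f for f, b in zip(fibs, digits) if b == "1")

print([fib_encode(n) for n in (1, 2, 3, 4)])  # ['11', '011', '0011', '1011']
print(fib_decode("1011"))                     # 4
```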

    Generate fuzzy string-matching to build self attention on Indonesian medical-chatbot

    A chatbot is a form of interactive conversation that requires quick and precise answers. The process of identifying answers to users' questions involves string matching and the handling of incorrect spelling. Therefore, a system that can independently predict and correct letters is highly necessary. The approach used to address this issue is to enhance the fuzzy string-matching method by incorporating several features for self-attention. The combinations of fuzzy string-matching methods employed are Jaro-Winkler distance + Damerau-Levenshtein distance and Damerau-Levenshtein + Rabin-Karp. The reason for using these combinations is their ability not only to match strings but also to correct word typing errors. This research contributes by developing a self-attention mechanism through a modified fuzzy string-matching model with enhanced word feature structures. The goal is to utilize this self-attention mechanism in constructing the Indonesian medical bidirectional encoder representations from transformers (IM-BERT). This serves as a foundation for additional features to provide accurate answers in the Indonesian medical question-and-answer system, achieving an exact match of 85.7% and an F1-score of 87.6%.
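    As background for the distance functions named above, here is a small sketch of the Damerau-Levenshtein distance in its common optimal-string-alignment form (an illustrative helper, not the paper's implementation). It counts insertions, deletions, substitutions, and adjacent transpositions, the operation that captures typing errors such as swapped letters.

```python
# Background sketch: Damerau-Levenshtein distance (optimal string alignment
# variant). Illustrative only; not the paper's implementation.

def damerau_levenshtein(a: str, b: str) -> int:
    """Edit distance counting insertions, deletions, substitutions, and
    transpositions of adjacent characters."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                              # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                              # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

# Hypothetical typo of the Indonesian word "sakit" (sick): one adjacent swap.
print(damerau_levenshtein("sakit", "saikt"))  # 1
```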

    Algorithm engineering : string processing

    The string matching problem has attracted a lot of interest throughout the history of computer science, and is crucial to the computing industry. The theoretical community in computer science has developed a rich literature in the design and analysis of string matching algorithms. To date, most of this work has been based on the asymptotic analysis of the algorithms. This analysis rarely tells us how an algorithm will perform in practice, and considerable experimentation and fine-tuning is typically required to get the most out of a theoretical idea. In this thesis, promising string matching algorithms discovered by the theoretical community are implemented, tested and refined to the point where they can be usefully applied in practice. We prove that the time complexity of the new algorithms is linear in the average case, and we compare the new algorithms with existing ones by experimentation. In the course of this work we have presented the following new algorithms:
    - We implemented the existing one-dimensional string matching algorithms for English texts. From the experimental results we identified the best two algorithms, combined them, and introduced a new algorithm.
    - We developed a new two-dimensional string matching algorithm. This algorithm uses the structure of the pattern to reduce the number of comparisons required to search for the pattern.
    - We described a method for efficiently storing text. Although this reduces the size of the storage space, it is not a compression method in the usual sense: our aim is to improve both the space and the time taken by a string matching algorithm. Our new algorithm searches for patterns in the efficiently stored text without decompressing it.
    - We illustrated that by pre-processing the text we can improve the speed of string matching when searching for a large number of patterns in a given text.
    - We proposed a hardware solution for searching in an efficiently stored DNA text.
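    To make the algorithm-engineering theme concrete, here is a sketch of one classic, practice-oriented exact matcher of the kind such studies benchmark: Horspool's simplification of Boyer-Moore. This is illustrative background only; the thesis's own combined algorithm is not reproduced here.

```python
# Illustrative baseline: Horspool's algorithm, a classic practice-oriented
# exact matcher often used as a benchmark in algorithm engineering.

def horspool(text: str, pattern: str):
    """Yield the starting index of every occurrence of pattern in text."""
    m = len(pattern)
    if m == 0 or m > len(text):
        return
    # Shift table: distance from each character's last occurrence in
    # pattern[:-1] to the end of the pattern; absent characters shift by m.
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= len(text) - m:
        if text[i:i + m] == pattern:
            yield i
        i += shift.get(text[i + m - 1], m)   # slide by the last window character

print(list(horspool("abracadabra", "abra")))  # [0, 7]
```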

    Representation and Exploitation of Event Sequences

    The Ten Commandments, the thirty best smartphones on the market and the five most wanted people by the FBI. Our life is ruled by sequences: thought sequences, number sequences, event sequences... a history book is nothing more than a compilation of events, and our favorite film is just a sequence of scenes. All of them have something in common: it is possible to acquire relevant information from them. Frequently, by accumulating some data over the elements of each sequence we may access hidden information (e.g., the number of passengers transported by a bus on a journey is the sum of the passengers who got on at the sequence of stops made); other times, reordering the elements by any of their characteristics facilitates access to the elements of interest (e.g., the books published in 2019 can be ordered chronologically, by author, by literary genre, or even by a combination of characteristics); but one will always seek to store them in the smallest space possible. Thus, this thesis proposes technological solutions for the storage and subsequent processing of events, focusing specifically on three fundamental aspects that arise in any application that needs to manage them: compressed and dynamic storage, aggregation or accumulation over the elements of the sequence, and reordering of the elements of the sequence by their different characteristics or dimensions. The first contribution of this work is a compact structure for the dynamic compression of event sequences. This structure allows any sequence to be compressed in a single pass, that is, it is capable of compressing in real time as elements arrive. This contribution is a milestone in the world of compression since, to date, it is the first proposal for a general-purpose variable-to-variable dynamic compressor. Regarding aggregation, a data-warehouse-like proposal is presented that is capable of storing information on any characteristic of the events in a sequence in an aggregated, compact and accessible way. Following the philosophy of current data warehouses, we avoid repeating cumulative operations and speed up aggregate queries by preprocessing the information and keeping it in this separate structure. Finally, this thesis addresses the problem of indexing event sequences considering their different characteristics and possible reorderings. A new approach is presented for simultaneously keeping the elements of a sequence ordered by different characteristics through compact structures. Thus, it is possible to query the information and perform operations on the elements of the sequence using any possible reordering in a simple and efficient way.
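    The aggregation idea can be pictured with a toy prefix-sum structure (a stand-in for the thesis's compact aggregated structure, not its actual design): precompute cumulative values once, so that any range aggregate, such as the bus-passenger example above, is answered with a single subtraction instead of re-accumulating on every query.

```python
# Toy illustration of the aggregation idea: prefix sums over one
# characteristic of an event sequence give O(1) range aggregates.
from itertools import accumulate

boarded = [5, 3, 0, 7, 2]                 # passengers boarding at each stop
prefix = [0] + list(accumulate(boarded))  # prefix[i] = total boarded before stop i

def boarded_between(i: int, j: int) -> int:
    """Total passengers boarding from stop i to stop j inclusive (0-indexed)."""
    return prefix[j + 1] - prefix[i]

print(boarded_between(1, 3))  # 3 + 0 + 7 = 10
```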

    Efficient compression of large repetitive strings

    When it comes to managing large volumes of data, general-purpose compressors such as gzip are ubiquitous. They are fast, practical and available on every modern platform, from standard desktops to mobile devices. These tools exploit local redundancy in a text using a fixed-size sliding window. This window is usually very small relative to the text, although in principle it can be as large as available memory. The window acts as a dictionary, and compression is achieved by replacing substrings with pointers to previous occurrences found in the dictionary. This type of algorithm becomes problematic when dealing with collections that are larger than physical memory, as it fails to capture any non-local redundancy, that is, repetition that occurs outside of its search window. With rapid growth in the already enormous amount of data we store and process, there is a pressing need to improve compression effectiveness, reducing both storage requirements and decompression costs. However, many systems still use general-purpose compression tools on large, highly repetitive data collections. In this thesis we focus on addressing this issue. We explore compression in a variety of domains where large volumes of data need to be stored and accessed, and where general-purpose compression tools are the norm. First we discuss our work on web corpus compression; then we discuss the implementation of a practical index for repetitive texts that gives strong theoretical bounds in terms of size and access; and finally, we discuss our work on compression of high-throughput sequencing reads. We show that in all cases our new methods improve on current techniques in both run-time and compression effectiveness, and provide important functionality such as fast decoding and random access.
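    The sliding-window limitation described above is easy to observe in practice (a quick sketch assuming Python's zlib, whose DEFLATE window is at most 32 KiB): a repeat that lies within the window compresses away, while the same amount of repetition placed farther back goes completely undetected.

```python
# Demonstration of the sliding-window limitation: zlib (DEFLATE) uses a
# window of at most 32 KiB, so repeats farther apart than that are missed.
import os
import zlib

chunk = os.urandom(16 * 1024)            # 16 KiB of incompressible bytes
near = chunk * 8                         # repeats 16 KiB apart: inside the window
far = os.urandom(64 * 1024) * 2          # one repeat 64 KiB back: outside it

print(len(near), len(zlib.compress(near, 9)))  # ~16 KiB output: repeats found
print(len(far), len(zlib.compress(far, 9)))    # ~128 KiB output: repeat missed
```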

    Online Modeling and Tuning of Parallel Stream Processing Systems

    Writing performant computer programs is hard. Code for high performance applications is profiled, tweaked, and re-factored for months, specifically for the hardware on which it is to run. Consumer application code doesn't get the endless massaging that benefits high performance code, even though heterogeneous processor environments are beginning to resemble those in more performance-oriented arenas. This thesis offers a path to performant, parallel code (through stream processing) which is tuned online and automatically adapts to the environment it is given. This approach has the potential to reduce the tuning costs associated with high performance code and brings the benefit of performance tuning to consumer applications where it would otherwise be cost prohibitive.
    This thesis introduces a stream processing library and multiple techniques to enable its online modeling and tuning. Stream processing (also termed data-flow programming) is a compute paradigm that views an application as a set of logical kernels connected via communication links, or streams. Stream processing is increasingly used by computational-x and x-informatics fields (e.g., biology, astrophysics) where the focus is on safe and fast parallelization of specific big-data applications. A major advantage of stream processing is that it enables parallelization without requiring the end user to manually manage the non-deterministic behavior often characteristic of more traditional parallel processing methods. Many big-data and high performance applications involve high-throughput processing, necessitating the use of many parallel compute kernels on several compute cores. Optimizing the orchestration of kernels has been the focus of much theoretical and empirical modeling work. Purely theoretical parallel programming models can fail when the assumptions implicit within the model are mismatched with reality (i.e., the model is incorrectly applied). Often it is unclear whether the assumptions are actually being met, even when verified under controlled conditions. Full empirical optimization solves this problem by extensively searching the range of likely configurations under native operating conditions. This, however, is expensive in both time and energy. For large, massively parallel systems, even deciding which modeling paradigm to use is often prohibitively expensive and, unfortunately, transient (with workload and hardware). In an ideal world, a parallel run-time would re-optimize an application continuously to match its environment, with little additional overhead. This work presents methods aimed at doing just that, through low-overhead instrumentation, modeling, and optimization. Online optimization provides a good trade-off between static optimization and online heuristics. To enable online optimization, modeling decisions must be fast and relatively accurate.
    Online modeling and optimization of a stream processing system first requires a stream processing framework that is amenable to the intended type of dynamic manipulation. To fill this void, we developed the RaftLib C++ template library, which enables usage of the stream processing paradigm for C++ applications (it is the run-time that forms the basis of almost all the work within this dissertation). An application topology is specified by the user, but almost everything else is optimizable by the run-time. RaftLib takes advantage of the knowledge gained during the design of several prior streaming languages (notably Auto-Pipe). The resultant framework enables online migration of tasks, auto-parallelization, online buffer reallocation, and other useful dynamic behaviors that were not available in many previous stream processing systems. Several benchmark applications have been designed to assess the performance gains of our approaches and to compare performance with other leading stream processing frameworks. Information is essential to any modeling task; to that end, a low-overhead instrumentation framework has been developed which is both dynamic and adaptive. Discovering a fast and relatively optimal configuration for a stream processing application often necessitates solving for buffer sizes within a finite-capacity queueing network. We show that a generalized gain/loss network flow model can bootstrap this process under certain conditions. Any modeling effort requires that a model be selected, often a highly manual task involving many expensive operations. This dissertation demonstrates that machine learning methods (such as a support vector machine) can successfully select models at run-time for a streaming application. The full set of approaches is incorporated into the open source RaftLib framework.
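    The kernels-connected-by-streams model is easy to picture in miniature (a toy Python sketch of the data-flow paradigm only; RaftLib itself is a C++ template library and its API is not reproduced here). Note how the bounded queues stand in for the finite-capacity streams whose buffer sizes the run-time must tune.

```python
# Toy illustration of the stream-processing (data-flow) paradigm: independent
# kernels connected only by streams, so a runtime is free to place, duplicate,
# or re-buffer them. This mirrors the programming model, not RaftLib's API.
import threading
import queue

def source(out_q):
    for i in range(10):
        out_q.put(i)
    out_q.put(None)                      # end-of-stream marker

def square(in_q, out_q):
    while (item := in_q.get()) is not None:
        out_q.put(item * item)
    out_q.put(None)                      # propagate end-of-stream

def sink(in_q):
    while (item := in_q.get()) is not None:
        print(item)

# The topology source >> square >> sink, each kernel on its own thread,
# connected by bounded streams (here, queues of capacity 4).
q1, q2 = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
threads = [threading.Thread(target=f, args=a)
           for f, a in ((source, (q1,)), (square, (q1, q2)), (sink, (q2,)))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```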