194 research outputs found

    Logic programming in the context of multiparadigm programming: the Oz experience

    Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with a single set of concepts, resulting in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature.
    Comment: 48 pages, to appear in the journal "Theory and Practice of Logic Programming".
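The abstract's distinction between algorithmic and search problems can be illustrated outside Oz as well. The sketch below (in Python, since this listing has no code of its own) makes the nondeterministic choice of a search-style problem explicit through backtracking; the `subset_sum` function and its inputs are illustrative, not from the article.

```python
def subset_sum(nums, target):
    """Tiny backtracking search: no efficient algorithm is known for
    subset-sum in general, but an explicit choice point (include the
    element or skip it) plus backtracking finds a solution."""
    def go(i, remaining, chosen):
        if remaining == 0:
            return chosen            # success: a solution was found
        if i == len(nums):
            return None              # dead end: backtrack
        # Try including nums[i]; on failure, try skipping it.
        return (go(i + 1, remaining - nums[i], chosen + [nums[i]])
                or go(i + 1, remaining, chosen))
    return go(0, target, [])

sol = subset_sum([3, 9, 8, 4], 12)
```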

    Fast and accurate protein substructure searching with simulated annealing and GPUs

    Background: Searching a database of protein structures for matches to a query structure, or for occurrences of a structural motif, is an important task in structural biology and bioinformatics. While there are many existing methods for structural similarity searching, faster and more accurate approaches are still required, and few current methods are capable of substructure (motif) searching.
    Results: We developed an improved heuristic for tableau-based protein structure and substructure searching using simulated annealing that is as fast as or faster than, and comparable in accuracy to, some widely used existing methods. Furthermore, we created a parallel implementation on a modern graphics processing unit (GPU).
    Conclusions: The GPU implementation achieves up to 34 times speedup over the CPU implementation of tableau-based structure search with simulated annealing, making it one of the fastest available methods. To the best of our knowledge, this is the first application of a GPU to the protein structural search problem.
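The core of the heuristic named in the abstract, simulated annealing, can be sketched generically. This is a minimal illustrative annealer, not the paper's tableau-based search: the `neighbor` and `score` callables, the cooling schedule, and all parameters are assumptions for the sketch.

```python
import math
import random

def simulated_annealing(initial, neighbor, score, t0=1.0, cooling=0.95,
                        steps=1000, seed=0):
    """Generic simulated-annealing maximizer: accept a worse candidate
    with probability exp(delta / T), so the search can escape local
    optima while the temperature T is still high."""
    rng = random.Random(seed)
    current = best = initial
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        delta = score(cand) - score(current)
        # Always accept improvements; accept worsenings probabilistically.
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = cand
            if score(current) > score(best):
                best = current
        t *= cooling                 # geometric cooling schedule
    return best

# Toy usage: maximize -(x - 3)^2 over integers with +/-1 moves.
result = simulated_annealing(
    0,
    lambda x, rng: x + rng.choice([-1, 1]),
    lambda x: -(x - 3) ** 2,
)
```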

    Convergent types for shared memory

    Master's dissertation in Computer Science. It is well known that consistency in shared-memory concurrent programming comes at the price of degraded performance and scalability. Some existing solutions to this problem end up with high complexity and are not programmer friendly. We present a simple, well-defined approach to obtaining relevant results in shared-memory environments by relaxing synchronization. To that end, we study Mergeable Data Types (MDTs), data structures analogous to Conflict-Free Replicated Data Types (CRDTs) but designed to perform well in shared memory. CRDTs were the first formal approach to develop a solid theoretical study of eventual consistency in distributed systems, responding to the CAP theorem and providing high availability. With CRDTs, updates are unsynchronized, and replicas eventually converge to a correct common state. However, CRDTs are not designed to perform in shared memory: in large-scale distributed systems the merge cost is negligible compared to network-mediated synchronization. We have therefore migrated the concept, developing the already existing Mergeable Data Types by formally defining a programming model that we named Global-Local View. Furthermore, we have created a portfolio of MDTs and demonstrated that, in appropriate scenarios, the model yields large benefits.
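A minimal sketch of the global-local-view idea described above, assuming a counter as the mergeable type (the class name and API are illustrative, not the thesis's): each thread mutates an unsynchronized local view and pays for synchronization only when it merges its local delta into the shared global state.

```python
import threading

class MergeableCounter:
    """Illustrative mergeable counter in the global-local-view style:
    increments touch only thread-local state; the lock is taken only
    on merge and on reads of the global state."""
    def __init__(self):
        self._global = 0
        self._lock = threading.Lock()
        self._local = threading.local()

    def increment(self, n=1):
        # Unsynchronized update to this thread's private local view.
        self._local.delta = getattr(self._local, "delta", 0) + n

    def merge(self):
        # The only synchronized operation: fold the local delta into
        # the shared global state.
        delta = getattr(self._local, "delta", 0)
        self._local.delta = 0
        with self._lock:
            self._global += delta

    def value(self):
        # This thread's view: global state plus its unmerged delta.
        with self._lock:
            return self._global + getattr(self._local, "delta", 0)

c = MergeableCounter()
for _ in range(5):
    c.increment()
c.merge()
```

The trade-off mirrors the abstract's point: reads on other threads may miss unmerged increments (relaxed consistency), in exchange for synchronization-free updates.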

    Allyn, A Recommender Assistant for Online Bookstores

    Final project of the Degree in Economics and Statistics. Interuniversity double degree, Universitat de Barcelona and Universitat Politècnica de Catalunya. Academic year: 2017-2018. Tutors: Esteban Vegas Lozano; Salvador Torra Porras.
    Recommender Systems are information-filtering engines used to estimate user preferences for items they have not seen: books, movies, restaurants, or other things for which individuals have different tastes. This project focuses on book recommendation. Recommender systems date back to around 1990, but it is during the last decade, with the boom of information and big data, that they have gained wider impact. Collaborative Filtering and Content-based Filtering have been the two popular memory-based methods for retrieving recommendations, but these suffer from limitations and may fail to provide effective or precise recommendations. In this project we present several variations of Artificial Neural Networks, in particular Autoencoders, to generate model-based predictions for users. Autoencoders are networks whose input and output coincide, so they learn to uncover the patterns underlying very sparse data. All models are implemented in two frameworks: Keras and TensorFlow for R. We show empirically that a hybrid approach combining this model with other filtering engines provides a promising solution compared to a standalone memory-based Collaborative Filtering recommender. We also analyse recommender systems from an economic point of view, with special emphasis on their impact on e-commerce companies: the recommender systems developed by four pioneering companies in the sector are examined, as well as the front-end technologies in which they are deployed, in particular chatbots, instant-messaging programs that simulate human conversation through Artificial Intelligence. To wrap up the project, a chatbot embedded in an instant-messaging application and connected to an e-commerce platform has been implemented so that, using the hybrid recommender system developed, it can deliver recommendations to users.
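The hybrid idea in the abstract, blending a model-based prediction with a memory-based collaborative-filtering one, can be sketched as follows. This is an illustrative pure-Python sketch, not the project's Keras/TensorFlow-for-R implementation; the blending weight `alpha`, the function names, and the toy rating matrix (0 meaning "unrated") are assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def cf_predict(ratings, user, item):
    """Memory-based collaborative filtering: similarity-weighted
    average of other users' ratings for the item."""
    num = den = 0.0
    for other, row in enumerate(ratings):
        if other == user or row[item] == 0:
            continue
        s = cosine(ratings[user], row)
        num += s * row[item]
        den += abs(s)
    return num / den if den else 0.0

def hybrid_predict(cf, model, alpha=0.5):
    """Weighted hybrid: blend the memory-based estimate with a
    model-based one (e.g. an autoencoder reconstruction)."""
    return alpha * model + (1 - alpha) * cf

# Toy usage: predict user 0's rating for item 2.
ratings = [
    [5, 3, 0],
    [4, 0, 4],
    [1, 1, 5],
]
estimate = cf_predict(ratings, 0, 2)
```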

    Data structures

    We discuss data structures and their methods of analysis. In particular, we treat the unweighted and weighted dictionary problem, self-organizing data structures, persistent data structures, the union-find-split problem, priority queues, the nearest common ancestor problem, the selection and merging problem, and dynamization techniques. The methods of analysis are worst-case, average-case, and amortized analysis.
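As a concrete instance of the amortized analysis mentioned above, the union part of the union-find-split problem is the classic example: with union by rank and path compression, a sequence of operations runs in near-constant amortized time per operation. A minimal sketch (illustrative, not from the survey):

```python
class UnionFind:
    """Disjoint-set structure with union by rank and path halving:
    the textbook example of amortized (inverse-Ackermann) analysis."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: every visited node points to its grandparent,
        # flattening the tree for future finds.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        # Union by rank: attach the shallower tree under the deeper.
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)
```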

    Using neural networks based on epigenomic maps for predicting the transcriptional regulation measured by CRISPR/Cas9

    [EN] The great impact that genome editing with CRISPR/Cas9 has had in recent years, and the great advances it brings to biotechnology, have created a strong need for information. However, researchers struggle to find a definite pattern in these experiments, making the search for an optimal solution for a particular experiment a very long process of trial and error. With this project we intend to optimize genome editing with CRISPR/Cas9: to find the optimal insertion site we design a mathematical model based on neural networks. During this process we had to deal with huge amounts of information from the genome, so we had to develop a way to filter and handle it efficiently. This project focuses on Arabidopsis thaliana, a plant very common in genome editing with many resources available online.
    Barberá Mourelle, A. (2016). Using neural networks based on epigenomic maps for predicting the transcriptional regulation measured by CRISPR/Cas9. http://hdl.handle.net/10251/69318. TFG
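A common first step in models like the one described, turning a genomic window into neural-network input, is one-hot encoding of the sequence. This tiny sketch is an assumption for illustration; the thesis's actual input features come from epigenomic maps, not raw sequence alone.

```python
def one_hot_dna(seq):
    """Encode a DNA string as a list of one-hot vectors over A, C, G, T,
    the usual way sequence windows are fed to a neural network."""
    table = {
        "A": [1, 0, 0, 0],
        "C": [0, 1, 0, 0],
        "G": [0, 0, 1, 0],
        "T": [0, 0, 0, 1],
    }
    return [table[base] for base in seq.upper()]

enc = one_hot_dna("acgt")
```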