1,041 research outputs found

    Independent AND-parallel implementation of narrowing

    We present a parallel graph narrowing machine, which is used to implement a functional logic language on a shared-memory multiprocessor. It is an extension of an abstract machine for a purely functional language. The result is a programmed graph reduction machine that integrates the mechanisms of unification, backtracking, and independent AND-parallelism. In the machine, the subexpressions of an expression can run in parallel. In the case of backtracking, the structure of an expression is used to avoid the reevaluation of subexpressions as far as possible. Deterministic computations are detected; their results are retained and need not be reevaluated after backtracking.
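    As an illustration of independent AND-parallelism (a minimal sketch only, not the paper's graph narrowing machine, which also integrates unification and backtracking), the following Haskell fragment evaluates the independent subexpressions of a call in parallel. It assumes the parallel package's Control.Parallel.Strategies; the toy Expr type and eval function are invented for the example.

    -- Minimal sketch: reduce the arguments of a call in parallel, in the
    -- spirit of independent AND-parallelism.  The Expr type and eval are
    -- illustrative assumptions, not the paper's machine.
    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- A toy expression whose subexpressions share no variables, so they
    -- can be reduced independently of one another.
    data Expr = Lit Int | Add [Expr]

    -- The arguments of Add are independent, so they are evaluated in
    -- parallel with parMap; their results are then combined.
    eval :: Expr -> Int
    eval (Lit n)  = n
    eval (Add es) = sum (parMap rdeepseq eval es)

    main :: IO ()
    main = print (eval (Add [Lit 1, Add [Lit 2, Lit 3], Lit 4]))

    Here parMap sparks one evaluation per argument of Add, which is safe precisely because the arguments share no data.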

    Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism

    Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – ‘ethical behaviourism’ – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended against seven objections. Second, taking this theoretical position on board, it is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high, and that they may soon cross it (if they haven’t done so already). Finally, the implications of this for our procreative duties to robots are considered, and it is argued that we may need to take seriously a duty of ‘procreative beneficence’ towards robots.

    Extrapolate: generalizing counterexamples of functional test properties

    This paper presents a new tool called Extrapolate that automatically generalizes counterexamples found by property-based testing in Haskell. Example applications show that generalized counterexamples can inform the programmer more fully and more immediately about what characterises failures. Extrapolate is able to produce more general results than similar tools. Although it is intrinsically unsound, as reported generalizations are based on testing, it works well for examples drawn from previously published work in this area.
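    The sketch below is a hand-rolled illustration of the underlying idea (not Extrapolate's actual algorithm or API): starting from a counterexample found by testing, replace each of its components with a range of other values, and treat the positions where the property keeps failing as candidates for generalization. The property, counterexample and helper names are invented for the example.

    -- A property claiming no list ever contains 0 (deliberately false).
    prop :: [Int] -> Bool
    prop xs = 0 `notElem` xs

    -- A failing case that plain property-based testing might report.
    counterexample :: [Int]
    counterexample = [5, 0]

    -- True at position i means the property still fails for every tested
    -- replacement value at i, so that position can become a variable in
    -- the reported counterexample.
    generalizable :: ([Int] -> Bool) -> [Int] -> [Bool]
    generalizable p xs =
      [ all (not . p) [ replaceAt i v xs | v <- [-3 .. 3] ]
      | i <- [0 .. length xs - 1] ]
      where
        replaceAt i v ys = take i ys ++ [v] ++ drop (i + 1) ys

    main :: IO ()
    main = print (generalizable prop counterexample)   -- [True,False]

    The output [True,False] says the 5 can be generalized to a variable while the 0 cannot, i.e. a generalized counterexample of roughly the form x:0:[].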

    A model repair application scenario with PROVA

    Master's dissertation in Engenharia Informática. Model-Driven Engineering (MDE) is a well-known approach to software development that promotes the use of structured specifications, referred to as models, as the primary development artifact. One of the main challenges in MDE is to deal with a wide diversity of evolving models, some of which are developed and maintained in parallel. In this setting, a particular point of attention is to manage the model inconsistencies that become inevitable, since it is all too easy to make contradictory design decisions and hard to recognise them. In fact, during the development process, user updates will inevitably produce inconsistencies which must eventually be repaired. Tool support for this task is essential in order to automate model repair, so that consistency can be easily recovered. However, one of the main challenges in this domain is that for any given set of inconsistencies there is a potentially infinite number of possible ways of fixing it. While many researchers recognise this fact, the way in which the problem should be resolved is far from agreed upon, and methods for detecting and fixing inconsistencies vary widely. In this master's dissertation a comparison between different approaches is made and an application scenario is explored in close collaboration with industry. An off-the-shelf model repair tool leveraging the power of satisfiability (SAT) solving is put to the test, while an incremental technique based on complex repair trees is implemented and evaluated as a promising yet very distinctive competitor.
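    To make the point about infinitely many possible fixes concrete, the toy Haskell sketch below (not the dissertation's tool, nor the SAT-based or repair-tree techniques it evaluates) enumerates the consistent models reachable from an inconsistent one by at most n primitive edits, shortest repairs first. The model, constraint and edit set are all invented for illustration.

    import Data.List (nub)

    -- Toy "model": an interval with a lower and an upper bound.
    type Model = (Int, Int)

    -- Consistency rule the model is supposed to satisfy.
    consistent :: Model -> Bool
    consistent (lo, hi) = lo <= hi

    -- Primitive edits a repair tool might consider.
    edits :: [Model -> Model]
    edits = [ \(lo, hi) -> (lo - 1, hi)   -- lower the lower bound
            , \(lo, hi) -> (lo, hi + 1)   -- raise the upper bound
            , \(lo, hi) -> (hi, hi)       -- snap lower bound to upper
            ]

    -- Consistent models reachable with at most n edits; breadth-first,
    -- so cheaper repairs are found before more invasive ones.
    repairs :: Int -> Model -> [Model]
    repairs n m0 = nub (filter consistent (concat levels))
      where
        levels  = take (n + 1) (iterate step [m0])
        step ms = nub [ e m | m <- ms, e <- edits ]

    main :: IO ()
    main = print (repairs 2 (3, 1))

    Even this toy inconsistency admits five distinct repairs within two edits, and more with every extra edit allowed, which is why practical tools rank candidate repairs and bound the search.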

    Optimizing Lempel-Ziv factorization for the GPU architecture

    Lossless data compression is used to reduce storage requirements, allowing for the relief of I/O channels and better utilization of bandwidth. The Lempel-Ziv lossless compression algorithms form the basis for many of the most commonly used compression schemes. General-purpose computing on graphics processing units (GPGPU) allows us to take advantage of the massively parallel nature of GPUs for computations other than their original purpose of rendering graphics. Our work targets the use of GPUs for general lossless data compression. Specifically, we developed and ported an algorithm that constructs the Lempel-Ziv factorization directly on the GPU. Our implementation bypasses the sequential nature of the LZ factorization and attempts to compute the factorization in parallel. By breaking down the LZ factorization into what we call the PLZ, we are able to outperform the fastest serial CPU implementations by up to 24x and perform comparably to a parallel multicore CPU implementation. To achieve these speeds, our implementation produced LZ factorizations that were on average only 0.01 percent larger than the optimal factorization that could be computed sequentially. We also re-evaluate the fastest GPU suffix array construction algorithm, which is needed to compute the LZ factorization, and find speedups of up to 5x over the fastest CPU implementations.
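    For reference, the sketch below gives a naive sequential Lempel-Ziv factorization in Haskell, i.e. the greedy longest-previous-match computation whose left-to-right dependency the parallel approach has to work around. It is a quadratic-time illustration only, not the thesis's PLZ or suffix-array-based GPU algorithm.

    import Data.List (maximumBy)
    import Data.Ord  (comparing)

    -- A factor is either a fresh literal character or a reference
    -- (source position, length) to an earlier occurrence.
    data Factor = Literal Char | Copy Int Int deriving Show

    -- Greedy LZ factorization: at each position i, emit the longest match
    -- starting at some earlier position p < i (overlapping matches are
    -- allowed, as in LZ77), or a literal if no earlier match exists.
    lzFactorize :: String -> [Factor]
    lzFactorize text = go 0
      where
        n = length text
        go i
          | i >= n             = []
          | i == 0 || len == 0 = Literal (text !! i) : go (i + 1)
          | otherwise          = Copy pos len : go (i + len)
          where
            (pos, len) = maximumBy (comparing snd)
                           [ (p, matchLen p i) | p <- [0 .. i - 1] ]
            matchLen p j =
              length (takeWhile id (zipWith (==) (drop p text) (drop j text)))

    main :: IO ()
    main = mapM_ print (lzFactorize "abababc")

    On "abababc" this yields the factors a, b, abab and c; the difficulty is that each factor's starting position depends on the lengths of all earlier factors, which is what makes a direct parallel computation non-trivial.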